00:00:00.001 Started by upstream project "autotest-per-patch" build number 132702 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.128 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.129 The recommended git tool is: git 00:00:00.129 using credential 00000000-0000-0000-0000-000000000002 00:00:00.131 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.187 Fetching changes from the remote Git repository 00:00:00.190 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.226 Using shallow fetch with depth 1 00:00:00.226 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.226 > git --version # timeout=10 00:00:00.258 > git --version # 'git version 2.39.2' 00:00:00.258 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.283 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.283 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.700 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.709 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.720 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.720 > git config core.sparsecheckout # timeout=10 00:00:06.732 > git read-tree -mu HEAD # timeout=10 00:00:06.746 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.767 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.767 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.853 [Pipeline] Start of Pipeline 00:00:06.867 [Pipeline] library 00:00:06.868 Loading library shm_lib@master 00:00:06.868 Library shm_lib@master is cached. Copying from home. 00:00:06.884 [Pipeline] node 00:00:06.899 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest_2 00:00:06.901 [Pipeline] { 00:00:06.912 [Pipeline] catchError 00:00:06.914 [Pipeline] { 00:00:06.926 [Pipeline] wrap 00:00:06.932 [Pipeline] { 00:00:06.938 [Pipeline] stage 00:00:06.939 [Pipeline] { (Prologue) 00:00:06.952 [Pipeline] echo 00:00:06.953 Node: VM-host-SM38 00:00:06.957 [Pipeline] cleanWs 00:00:06.965 [WS-CLEANUP] Deleting project workspace... 00:00:06.965 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.971 [WS-CLEANUP] done 00:00:07.175 [Pipeline] setCustomBuildProperty 00:00:07.248 [Pipeline] httpRequest 00:00:07.546 [Pipeline] echo 00:00:07.548 Sorcerer 10.211.164.20 is alive 00:00:07.555 [Pipeline] retry 00:00:07.557 [Pipeline] { 00:00:07.565 [Pipeline] httpRequest 00:00:07.569 HttpMethod: GET 00:00:07.570 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.570 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.584 Response Code: HTTP/1.1 200 OK 00:00:07.585 Success: Status code 200 is in the accepted range: 200,404 00:00:07.585 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:24.950 [Pipeline] } 00:00:24.970 [Pipeline] // retry 00:00:24.979 [Pipeline] sh 00:00:25.267 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:25.282 [Pipeline] httpRequest 00:00:26.395 [Pipeline] echo 00:00:26.397 Sorcerer 10.211.164.20 is alive 00:00:26.407 [Pipeline] retry 00:00:26.409 [Pipeline] { 00:00:26.422 [Pipeline] httpRequest 00:00:26.427 HttpMethod: GET 00:00:26.427 URL: http://10.211.164.20/packages/spdk_3c8001115a059cce731e057cfad468237a63e206.tar.gz 00:00:26.427 Sending request to url: http://10.211.164.20/packages/spdk_3c8001115a059cce731e057cfad468237a63e206.tar.gz 00:00:26.455 Response Code: HTTP/1.1 200 OK 00:00:26.456 Success: Status code 200 is in the accepted range: 200,404 00:00:26.456 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_3c8001115a059cce731e057cfad468237a63e206.tar.gz 00:04:26.017 [Pipeline] } 00:04:26.036 [Pipeline] // retry 00:04:26.045 [Pipeline] sh 00:04:26.330 + tar --no-same-owner -xf spdk_3c8001115a059cce731e057cfad468237a63e206.tar.gz 00:04:29.666 [Pipeline] sh 00:04:29.950 + git -C spdk log --oneline -n5 00:04:29.950 3c8001115 accel/mlx5: More precise condition to update DB 00:04:29.950 98eca6fa0 lib/thread: Add API to register a post poller handler 00:04:29.950 2c140f58f nvme/rdma: Support accel sequence 00:04:29.950 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing 00:04:29.950 77ee034c7 bdev/nvme: Add lock to unprotected operations around attach controller 00:04:29.968 [Pipeline] writeFile 00:04:29.982 [Pipeline] sh 00:04:30.270 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:04:30.284 [Pipeline] sh 00:04:30.579 + cat autorun-spdk.conf 00:04:30.579 SPDK_RUN_FUNCTIONAL_TEST=1 00:04:30.579 SPDK_TEST_NVME=1 00:04:30.579 SPDK_TEST_FTL=1 00:04:30.579 SPDK_TEST_ISAL=1 00:04:30.579 SPDK_RUN_ASAN=1 00:04:30.579 SPDK_RUN_UBSAN=1 00:04:30.579 SPDK_TEST_XNVME=1 00:04:30.579 SPDK_TEST_NVME_FDP=1 00:04:30.579 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:30.587 RUN_NIGHTLY=0 00:04:30.589 [Pipeline] } 00:04:30.603 [Pipeline] // stage 00:04:30.618 [Pipeline] stage 00:04:30.620 [Pipeline] { (Run VM) 00:04:30.633 [Pipeline] sh 00:04:30.919 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:04:30.919 + echo 'Start stage prepare_nvme.sh' 00:04:30.919 Start stage prepare_nvme.sh 00:04:30.919 + [[ -n 0 ]] 00:04:30.919 + disk_prefix=ex0 00:04:30.919 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]] 00:04:30.919 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]] 00:04:30.919 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf 00:04:30.919 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:04:30.919 ++ SPDK_TEST_NVME=1 00:04:30.919 ++ SPDK_TEST_FTL=1 00:04:30.919 ++ 
SPDK_TEST_ISAL=1 00:04:30.919 ++ SPDK_RUN_ASAN=1 00:04:30.919 ++ SPDK_RUN_UBSAN=1 00:04:30.919 ++ SPDK_TEST_XNVME=1 00:04:30.919 ++ SPDK_TEST_NVME_FDP=1 00:04:30.919 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:04:30.919 ++ RUN_NIGHTLY=0 00:04:30.919 + cd /var/jenkins/workspace/nvme-vg-autotest_2 00:04:30.919 + nvme_files=() 00:04:30.919 + declare -A nvme_files 00:04:30.919 + backend_dir=/var/lib/libvirt/images/backends 00:04:30.919 + nvme_files['nvme.img']=5G 00:04:30.919 + nvme_files['nvme-cmb.img']=5G 00:04:30.919 + nvme_files['nvme-multi0.img']=4G 00:04:30.919 + nvme_files['nvme-multi1.img']=4G 00:04:30.919 + nvme_files['nvme-multi2.img']=4G 00:04:30.919 + nvme_files['nvme-openstack.img']=8G 00:04:30.919 + nvme_files['nvme-zns.img']=5G 00:04:30.919 + (( SPDK_TEST_NVME_PMR == 1 )) 00:04:30.919 + (( SPDK_TEST_FTL == 1 )) 00:04:30.919 + nvme_files["nvme-ftl.img"]=6G 00:04:30.919 + (( SPDK_TEST_NVME_FDP == 1 )) 00:04:30.919 + nvme_files["nvme-fdp.img"]=1G 00:04:30.919 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:04:30.919 + for nvme in "${!nvme_files[@]}" 00:04:30.919 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:04:31.181 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:04:31.181 + for nvme in "${!nvme_files[@]}" 00:04:31.181 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-ftl.img -s 6G 00:04:31.181 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:04:31.443 + for nvme in "${!nvme_files[@]}" 00:04:31.443 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:04:31.443 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:04:31.443 + for nvme in "${!nvme_files[@]}" 00:04:31.443 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:04:31.443 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:04:31.443 + for nvme in "${!nvme_files[@]}" 00:04:31.443 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:04:31.443 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:04:31.443 + for nvme in "${!nvme_files[@]}" 00:04:31.444 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:04:31.705 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:04:31.705 + for nvme in "${!nvme_files[@]}" 00:04:31.705 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:04:31.966 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:04:31.966 + for nvme in "${!nvme_files[@]}" 00:04:31.966 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-fdp.img -s 1G 00:04:37.278 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:04:37.278 + for nvme in "${!nvme_files[@]}" 00:04:37.278 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img 
-s 5G 00:04:37.278 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:04:37.278 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:04:37.278 + echo 'End stage prepare_nvme.sh' 00:04:37.278 End stage prepare_nvme.sh 00:04:37.291 [Pipeline] sh 00:04:37.577 + DISTRO=fedora39 00:04:37.577 + CPUS=10 00:04:37.577 + RAM=12288 00:04:37.577 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:04:37.577 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex0-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:04:37.577 00:04:37.577 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant 00:04:37.577 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk 00:04:37.577 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2 00:04:37.577 HELP=0 00:04:37.577 DRY_RUN=0 00:04:37.577 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,/var/lib/libvirt/images/backends/ex0-nvme-fdp.img, 00:04:37.577 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:04:37.577 NVME_AUTO_CREATE=0 00:04:37.577 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,, 00:04:37.577 NVME_CMB=,,,, 00:04:37.577 NVME_PMR=,,,, 00:04:37.577 NVME_ZNS=,,,, 00:04:37.577 NVME_MS=true,,,, 00:04:37.577 NVME_FDP=,,,on, 00:04:37.577 SPDK_VAGRANT_DISTRO=fedora39 00:04:37.577 SPDK_VAGRANT_VMCPU=10 00:04:37.577 SPDK_VAGRANT_VMRAM=12288 00:04:37.577 SPDK_VAGRANT_PROVIDER=libvirt 00:04:37.577 SPDK_VAGRANT_HTTP_PROXY= 00:04:37.577 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:04:37.577 SPDK_OPENSTACK_NETWORK=0 00:04:37.577 VAGRANT_PACKAGE_BOX=0 00:04:37.577 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile 00:04:37.577 FORCE_DISTRO=true 00:04:37.577 VAGRANT_BOX_VERSION= 00:04:37.577 EXTRA_VAGRANTFILES= 00:04:37.577 NIC_MODEL=e1000 00:04:37.577 00:04:37.577 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt' 00:04:37.578 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2 00:04:40.123 Bringing machine 'default' up with 'libvirt' provider... 00:04:40.696 ==> default: Creating image (snapshot of base box volume). 00:04:40.696 ==> default: Creating domain with the following settings... 
00:04:40.696 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733426579_dc32ace64b4e737e317d 00:04:40.696 ==> default: -- Domain type: kvm 00:04:40.696 ==> default: -- Cpus: 10 00:04:40.697 ==> default: -- Feature: acpi 00:04:40.697 ==> default: -- Feature: apic 00:04:40.697 ==> default: -- Feature: pae 00:04:40.697 ==> default: -- Memory: 12288M 00:04:40.697 ==> default: -- Memory Backing: hugepages: 00:04:40.697 ==> default: -- Management MAC: 00:04:40.697 ==> default: -- Loader: 00:04:40.697 ==> default: -- Nvram: 00:04:40.697 ==> default: -- Base box: spdk/fedora39 00:04:40.697 ==> default: -- Storage pool: default 00:04:40.697 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733426579_dc32ace64b4e737e317d.img (20G) 00:04:40.697 ==> default: -- Volume Cache: default 00:04:40.697 ==> default: -- Kernel: 00:04:40.697 ==> default: -- Initrd: 00:04:40.697 ==> default: -- Graphics Type: vnc 00:04:40.697 ==> default: -- Graphics Port: -1 00:04:40.697 ==> default: -- Graphics IP: 127.0.0.1 00:04:40.697 ==> default: -- Graphics Password: Not defined 00:04:40.697 ==> default: -- Video Type: cirrus 00:04:40.697 ==> default: -- Video VRAM: 9216 00:04:40.697 ==> default: -- Sound Type: 00:04:40.697 ==> default: -- Keymap: en-us 00:04:40.697 ==> default: -- TPM Path: 00:04:40.697 ==> default: -- INPUT: type=mouse, bus=ps2 00:04:40.697 ==> default: -- Command line args: 00:04:40.697 ==> default: -> value=-device, 00:04:40.697 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:04:40.697 ==> default: -> value=-drive, 00:04:40.697 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:04:40.697 ==> default: -> value=-device, 00:04:40.697 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:04:40.697 ==> default: -> value=-device, 00:04:40.697 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:04:40.697 ==> default: -> value=-drive, 00:04:40.697 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-1-drive0, 00:04:40.697 ==> default: -> value=-device, 00:04:40.697 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:40.697 ==> default: -> value=-device, 00:04:40.697 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:04:40.697 ==> default: -> value=-drive, 00:04:40.697 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:04:40.697 ==> default: -> value=-device, 00:04:40.697 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:40.697 ==> default: -> value=-drive, 00:04:40.697 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:04:40.697 ==> default: -> value=-device, 00:04:40.697 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:40.697 ==> default: -> value=-drive, 00:04:40.697 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:04:40.697 ==> default: -> value=-device, 00:04:40.697 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:40.697 ==> default: -> value=-device, 00:04:40.697 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:04:40.697 ==> default: -> value=-device, 00:04:40.697 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:04:40.697 ==> default: -> value=-drive, 00:04:40.697 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:04:40.697 ==> default: -> value=-device, 00:04:40.697 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:04:40.697 ==> default: Creating shared folders metadata... 00:04:40.697 ==> default: Starting domain. 00:04:41.639 ==> default: Waiting for domain to get an IP address... 00:04:56.570 ==> default: Waiting for SSH to become available... 00:04:56.570 ==> default: Configuring and enabling network interfaces... 00:04:59.870 default: SSH address: 192.168.121.216:22 00:04:59.870 default: SSH username: vagrant 00:04:59.870 default: SSH auth method: private key 00:05:01.815 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk 00:05:08.404 ==> default: Mounting SSHFS shared folder... 00:05:09.792 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:05:09.792 ==> default: Checking Mount.. 00:05:10.737 ==> default: Folder Successfully Mounted! 00:05:10.737 00:05:10.737 SUCCESS! 00:05:10.737 00:05:10.737 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use. 00:05:10.737 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:05:10.737 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm. 00:05:10.737 00:05:10.748 [Pipeline] } 00:05:10.763 [Pipeline] // stage 00:05:10.772 [Pipeline] dir 00:05:10.772 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt 00:05:10.774 [Pipeline] { 00:05:10.789 [Pipeline] catchError 00:05:10.794 [Pipeline] { 00:05:10.815 [Pipeline] sh 00:05:11.098 + vagrant ssh-config --host vagrant 00:05:11.098 + sed -ne '/^Host/,$p' 00:05:11.098 + tee ssh_conf 00:05:13.641 Host vagrant 00:05:13.641 HostName 192.168.121.216 00:05:13.641 User vagrant 00:05:13.641 Port 22 00:05:13.641 UserKnownHostsFile /dev/null 00:05:13.641 StrictHostKeyChecking no 00:05:13.641 PasswordAuthentication no 00:05:13.641 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:05:13.641 IdentitiesOnly yes 00:05:13.641 LogLevel FATAL 00:05:13.641 ForwardAgent yes 00:05:13.641 ForwardX11 yes 00:05:13.641 00:05:13.674 [Pipeline] withEnv 00:05:13.683 [Pipeline] { 00:05:13.706 [Pipeline] sh 00:05:13.996 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:05:13.996 source /etc/os-release 00:05:13.996 [[ -e /image.version ]] && img=$(< /image.version) 00:05:13.996 # Minimal, systemd-like check. 
00:05:13.996 if [[ -e /.dockerenv ]]; then 00:05:13.996 # Clear garbage from the node'\''s name: 00:05:13.996 # agt-er_autotest_547-896 -> autotest_547-896 00:05:13.996 # $HOSTNAME is the actual container id 00:05:13.996 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:05:13.996 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:05:13.996 # We can assume this is a mount from a host where container is running, 00:05:13.996 # so fetch its hostname to easily identify the target swarm worker. 00:05:13.996 container="$(< /etc/hostname) ($agent)" 00:05:13.996 else 00:05:13.996 # Fallback 00:05:13.996 container=$agent 00:05:13.996 fi 00:05:13.996 fi 00:05:13.996 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:05:13.996 ' 00:05:14.271 [Pipeline] } 00:05:14.288 [Pipeline] // withEnv 00:05:14.297 [Pipeline] setCustomBuildProperty 00:05:14.313 [Pipeline] stage 00:05:14.315 [Pipeline] { (Tests) 00:05:14.334 [Pipeline] sh 00:05:14.620 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:05:14.636 [Pipeline] sh 00:05:14.921 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:05:14.937 [Pipeline] timeout 00:05:14.938 Timeout set to expire in 50 min 00:05:14.939 [Pipeline] { 00:05:14.954 [Pipeline] sh 00:05:15.240 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:05:15.501 HEAD is now at 3c8001115 accel/mlx5: More precise condition to update DB 00:05:15.516 [Pipeline] sh 00:05:15.823 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:05:15.838 [Pipeline] sh 00:05:16.146 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:05:16.237 [Pipeline] sh 00:05:16.515 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:05:16.515 ++ readlink -f spdk_repo 00:05:16.775 + DIR_ROOT=/home/vagrant/spdk_repo 00:05:16.775 + [[ -n /home/vagrant/spdk_repo ]] 00:05:16.775 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:05:16.775 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:05:16.775 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:05:16.775 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:05:16.775 + [[ -d /home/vagrant/spdk_repo/output ]] 00:05:16.775 + [[ nvme-vg-autotest == pkgdep-* ]] 00:05:16.775 + cd /home/vagrant/spdk_repo 00:05:16.775 + source /etc/os-release 00:05:16.775 ++ NAME='Fedora Linux' 00:05:16.775 ++ VERSION='39 (Cloud Edition)' 00:05:16.775 ++ ID=fedora 00:05:16.775 ++ VERSION_ID=39 00:05:16.775 ++ VERSION_CODENAME= 00:05:16.775 ++ PLATFORM_ID=platform:f39 00:05:16.775 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:05:16.775 ++ ANSI_COLOR='0;38;2;60;110;180' 00:05:16.775 ++ LOGO=fedora-logo-icon 00:05:16.775 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:05:16.775 ++ HOME_URL=https://fedoraproject.org/ 00:05:16.775 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:05:16.775 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:05:16.775 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:05:16.775 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:05:16.775 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:05:16.775 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:05:16.775 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:05:16.775 ++ SUPPORT_END=2024-11-12 00:05:16.775 ++ VARIANT='Cloud Edition' 00:05:16.775 ++ VARIANT_ID=cloud 00:05:16.775 + uname -a 00:05:16.775 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:05:16.775 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:17.035 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:17.296 Hugepages 00:05:17.296 node hugesize free / total 00:05:17.296 node0 1048576kB 0 / 0 00:05:17.296 node0 2048kB 0 / 0 00:05:17.296 00:05:17.296 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:17.296 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:17.296 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:17.296 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:17.296 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:17.296 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:17.296 + rm -f /tmp/spdk-ld-path 00:05:17.296 + source autorun-spdk.conf 00:05:17.296 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:17.296 ++ SPDK_TEST_NVME=1 00:05:17.296 ++ SPDK_TEST_FTL=1 00:05:17.296 ++ SPDK_TEST_ISAL=1 00:05:17.296 ++ SPDK_RUN_ASAN=1 00:05:17.296 ++ SPDK_RUN_UBSAN=1 00:05:17.296 ++ SPDK_TEST_XNVME=1 00:05:17.296 ++ SPDK_TEST_NVME_FDP=1 00:05:17.296 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:17.296 ++ RUN_NIGHTLY=0 00:05:17.296 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:05:17.296 + [[ -n '' ]] 00:05:17.296 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:05:17.296 + for M in /var/spdk/build-*-manifest.txt 00:05:17.296 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:05:17.296 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:17.296 + for M in /var/spdk/build-*-manifest.txt 00:05:17.296 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:05:17.296 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:17.296 + for M in /var/spdk/build-*-manifest.txt 00:05:17.296 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:05:17.296 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:05:17.296 ++ uname 00:05:17.296 + [[ Linux == \L\i\n\u\x ]] 00:05:17.296 + sudo dmesg -T 00:05:17.296 + sudo dmesg --clear 00:05:17.296 + dmesg_pid=5020 00:05:17.296 
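[Editor's note] The hugepage table printed by "setup.sh status" above reflects standard Linux sysfs counters rather than anything SPDK-specific. A minimal sketch that reads the same node0 numbers directly (the sysfs paths are the stock kernel interface; the command is illustrative and not part of the harness):

    # Print per-size hugepage counts for NUMA node 0, mirroring the
    # "node hugesize free / total" table emitted by setup.sh status.
    for d in /sys/devices/system/node/node0/hugepages/hugepages-*; do
        echo "node0 $(basename "$d"): $(cat "$d/free_hugepages") / $(cat "$d/nr_hugepages")"
    done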
+ [[ Fedora Linux == FreeBSD ]] 00:05:17.296 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:17.296 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:17.296 + sudo dmesg -Tw 00:05:17.296 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:05:17.296 + [[ -x /usr/src/fio-static/fio ]] 00:05:17.296 + export FIO_BIN=/usr/src/fio-static/fio 00:05:17.296 + FIO_BIN=/usr/src/fio-static/fio 00:05:17.296 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:05:17.296 + [[ ! -v VFIO_QEMU_BIN ]] 00:05:17.296 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:05:17.296 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:17.296 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:17.296 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:05:17.296 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:17.296 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:17.296 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:17.557 19:23:36 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:05:17.557 19:23:36 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:17.557 19:23:36 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:17.557 19:23:36 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:05:17.557 19:23:36 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:05:17.557 19:23:36 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:05:17.557 19:23:36 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:05:17.557 19:23:36 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:05:17.557 19:23:36 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:05:17.557 19:23:36 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:05:17.557 19:23:36 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:05:17.557 19:23:36 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:05:17.557 19:23:36 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:05:17.557 19:23:36 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:05:17.557 19:23:36 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:05:17.557 19:23:36 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:17.557 19:23:36 -- scripts/common.sh@15 -- $ shopt -s extglob 00:05:17.557 19:23:36 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:05:17.557 19:23:36 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:17.557 19:23:36 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:17.557 19:23:36 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.557 19:23:36 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.557 19:23:36 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.557 19:23:36 -- paths/export.sh@5 -- $ export PATH 00:05:17.557 19:23:36 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:17.557 19:23:36 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:05:17.557 19:23:36 -- common/autobuild_common.sh@493 -- $ date +%s 00:05:17.557 19:23:36 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733426616.XXXXXX 00:05:17.557 19:23:36 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733426616.4TIzPo 00:05:17.557 19:23:36 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:05:17.557 19:23:36 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:05:17.557 19:23:36 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:05:17.557 19:23:36 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:05:17.558 19:23:36 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:05:17.558 19:23:36 -- common/autobuild_common.sh@509 -- $ get_config_params 00:05:17.558 19:23:36 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:05:17.558 19:23:36 -- common/autotest_common.sh@10 -- $ set +x 00:05:17.558 19:23:36 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:05:17.558 19:23:36 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:05:17.558 19:23:36 -- pm/common@17 -- $ local monitor 00:05:17.558 19:23:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:17.558 19:23:36 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:17.558 19:23:36 -- pm/common@25 -- $ sleep 1 00:05:17.558 19:23:36 -- pm/common@21 -- $ date +%s 00:05:17.558 19:23:36 -- pm/common@21 -- $ date +%s 00:05:17.558 19:23:36 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733426616 00:05:17.558 19:23:36 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733426616 00:05:17.558 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733426616_collect-cpu-load.pm.log 00:05:17.558 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733426616_collect-vmstat.pm.log 00:05:18.496 19:23:37 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:05:18.496 19:23:37 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:05:18.496 19:23:37 -- spdk/autobuild.sh@12 -- $ umask 022 00:05:18.496 19:23:37 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:05:18.496 19:23:37 -- spdk/autobuild.sh@16 -- $ date -u 00:05:18.496 Thu Dec 5 07:23:37 PM UTC 2024 00:05:18.496 19:23:37 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:05:18.496 v25.01-pre-299-g3c8001115 00:05:18.496 19:23:37 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:05:18.496 19:23:37 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:05:18.496 19:23:37 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:05:18.496 19:23:37 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:05:18.496 19:23:37 -- common/autotest_common.sh@10 -- $ set +x 00:05:18.496 ************************************ 00:05:18.496 START TEST asan 00:05:18.496 ************************************ 00:05:18.496 using asan 00:05:18.496 19:23:37 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:05:18.496 00:05:18.496 real 0m0.000s 00:05:18.496 user 0m0.000s 00:05:18.496 sys 0m0.000s 00:05:18.496 19:23:37 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:18.496 ************************************ 00:05:18.496 END TEST asan 00:05:18.496 19:23:37 asan -- common/autotest_common.sh@10 -- $ set +x 00:05:18.496 ************************************ 00:05:18.496 19:23:37 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:05:18.496 19:23:37 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:05:18.496 19:23:37 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:05:18.496 19:23:37 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:05:18.496 19:23:37 -- common/autotest_common.sh@10 -- $ set +x 00:05:18.496 ************************************ 00:05:18.496 START TEST ubsan 00:05:18.496 ************************************ 00:05:18.496 using ubsan 00:05:18.496 19:23:37 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:05:18.496 00:05:18.496 real 0m0.000s 00:05:18.496 user 0m0.000s 00:05:18.496 sys 0m0.000s 00:05:18.496 19:23:37 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:18.496 19:23:37 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:05:18.496 ************************************ 00:05:18.496 END TEST ubsan 00:05:18.496 ************************************ 00:05:18.496 19:23:37 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:05:18.496 19:23:37 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:05:18.496 19:23:37 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:05:18.496 19:23:37 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:05:18.496 19:23:37 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:05:18.496 19:23:37 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:05:18.496 19:23:37 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
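[Editor's note] The "git describe --tags" value logged above, v25.01-pre-299-g3c8001115, decodes as <nearest tag>-<commits since tag>-g<abbreviated sha>: 299 commits past the v25.01-pre tag, at commit 3c8001115, i.e. the same revision fetched at the start of the run. A describe string is itself a valid commit-ish, so the full hash can be recovered with a sketch like the following (repo path taken from the log; the command is illustrative):

    # Peel the describe string back to the underlying commit hash.
    git -C /home/vagrant/spdk_repo/spdk rev-parse 'v25.01-pre-299-g3c8001115^{commit}'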
00:05:18.496 19:23:37 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:05:18.496 19:23:37 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:05:18.768 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:05:18.768 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:05:19.027 Using 'verbs' RDMA provider
00:05:29.958 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:05:39.956 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:05:40.527 Creating mk/config.mk...done.
00:05:40.527 Creating mk/cc.flags.mk...done.
00:05:40.527 Type 'make' to build.
00:05:40.527 19:23:59 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:05:40.527 19:23:59 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:05:40.527 19:23:59 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:05:40.527 19:23:59 -- common/autotest_common.sh@10 -- $ set +x
00:05:40.527 ************************************
00:05:40.527 START TEST make
00:05:40.527 ************************************
00:05:40.527 19:23:59 make -- common/autotest_common.sh@1129 -- $ make -j10
00:05:40.786 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:05:40.786 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:05:40.786 meson setup builddir \
00:05:40.786 -Dwith-libaio=enabled \
00:05:40.786 -Dwith-liburing=enabled \
00:05:40.786 -Dwith-libvfn=disabled \
00:05:40.786 -Dwith-spdk=disabled \
00:05:40.786 -Dexamples=false \
00:05:40.786 -Dtests=false \
00:05:40.786 -Dtools=false && \
00:05:40.786 meson compile -C builddir && \
00:05:40.786 cd -)
00:05:40.786 make[1]: Nothing to be done for 'all'.
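[Editor's note] For anyone replaying this build outside Jenkins, the two build steps logged above reduce to the sketch below; flags and paths are copied verbatim from the log, and the same Fedora 39 / gcc 13.3.1 toolchain is assumed:

    # Configure SPDK exactly as autobuild.sh did, then build with the
    # parallelism used in CI (-j10 matches SPDK_VAGRANT_VMCPU=10 above).
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
    make -j10

The --with-xnvme flag is what pulls in the nested meson setup/compile of xnvme, whose output follows below.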
00:05:42.767 The Meson build system 00:05:42.767 Version: 1.5.0 00:05:42.767 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:05:42.767 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:05:42.767 Build type: native build 00:05:42.767 Project name: xnvme 00:05:42.767 Project version: 0.7.5 00:05:42.767 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:42.767 C linker for the host machine: cc ld.bfd 2.40-14 00:05:42.767 Host machine cpu family: x86_64 00:05:42.767 Host machine cpu: x86_64 00:05:42.767 Message: host_machine.system: linux 00:05:42.767 Compiler for C supports arguments -Wno-missing-braces: YES 00:05:42.767 Compiler for C supports arguments -Wno-cast-function-type: YES 00:05:42.767 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:05:42.767 Run-time dependency threads found: YES 00:05:42.767 Has header "setupapi.h" : NO 00:05:42.767 Has header "linux/blkzoned.h" : YES 00:05:42.767 Has header "linux/blkzoned.h" : YES (cached) 00:05:42.767 Has header "libaio.h" : YES 00:05:42.767 Library aio found: YES 00:05:42.767 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:42.767 Run-time dependency liburing found: YES 2.2 00:05:42.767 Dependency libvfn skipped: feature with-libvfn disabled 00:05:42.767 Found CMake: /usr/bin/cmake (3.27.7) 00:05:42.767 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:05:42.767 Subproject spdk : skipped: feature with-spdk disabled 00:05:42.767 Run-time dependency appleframeworks found: NO (tried framework) 00:05:42.767 Run-time dependency appleframeworks found: NO (tried framework) 00:05:42.767 Library rt found: YES 00:05:42.767 Checking for function "clock_gettime" with dependency -lrt: YES 00:05:42.767 Configuring xnvme_config.h using configuration 00:05:42.767 Configuring xnvme.spec using configuration 00:05:42.767 Run-time dependency bash-completion found: YES 2.11 00:05:42.767 Message: Bash-completions: /usr/share/bash-completion/completions 00:05:42.767 Program cp found: YES (/usr/bin/cp) 00:05:42.768 Build targets in project: 3 00:05:42.768 00:05:42.768 xnvme 0.7.5 00:05:42.768 00:05:42.768 Subprojects 00:05:42.768 spdk : NO Feature 'with-spdk' disabled 00:05:42.768 00:05:42.768 User defined options 00:05:42.768 examples : false 00:05:42.768 tests : false 00:05:42.768 tools : false 00:05:42.768 with-libaio : enabled 00:05:42.768 with-liburing: enabled 00:05:42.768 with-libvfn : disabled 00:05:42.768 with-spdk : disabled 00:05:42.768 00:05:42.768 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:43.028 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:05:43.290 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:05:43.290 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:05:43.291 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:05:43.291 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:05:43.291 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:05:43.291 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:05:43.291 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:05:43.291 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:05:43.291 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:05:43.291 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:05:43.291 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:05:43.291 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:05:43.291 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:05:43.291 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:05:43.291 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:05:43.291 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:05:43.291 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:05:43.291 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:05:43.291 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:05:43.553 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:05:43.553 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:05:43.553 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:05:43.553 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:05:43.553 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:05:43.553 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:05:43.553 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:05:43.553 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:05:43.553 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:05:43.553 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:05:43.553 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:05:43.553 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:05:43.553 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:05:43.553 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:05:43.553 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:05:43.553 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:05:43.553 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:05:43.553 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:05:43.553 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:05:43.553 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:05:43.553 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:05:43.553 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:05:43.553 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:05:43.553 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:05:43.553 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:05:43.553 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:05:43.553 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:05:43.553 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:05:43.553 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:05:43.553 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:05:43.553 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:05:43.553 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:05:43.553 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:05:43.553 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:05:43.553 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:05:43.553 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:05:43.553 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:05:43.814 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:05:43.814 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:05:43.814 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:05:43.814 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:05:43.814 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:05:43.814 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:05:43.814 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:05:43.814 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:05:43.814 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:05:43.814 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:05:43.814 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:05:43.814 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:05:43.814 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:05:43.814 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:05:43.814 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:05:43.814 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:05:44.074 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:05:44.371 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:05:44.371 [75/76] Linking static target lib/libxnvme.a 00:05:44.371 [76/76] Linking target lib/libxnvme.so.0.7.5 00:05:44.371 INFO: autodetecting backend as ninja 00:05:44.371 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:05:44.371 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:05:50.988 The Meson build system 00:05:50.988 Version: 1.5.0 00:05:50.988 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:05:50.988 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:05:50.988 Build type: native build 00:05:50.988 Program cat found: YES (/usr/bin/cat) 00:05:50.988 Project name: DPDK 00:05:50.988 Project version: 24.03.0 00:05:50.988 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:50.988 C linker for the host machine: cc ld.bfd 2.40-14 00:05:50.988 Host machine cpu family: x86_64 00:05:50.988 Host machine cpu: x86_64 00:05:50.988 Message: ## Building in Developer Mode ## 00:05:50.988 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:50.988 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:05:50.988 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:50.988 Program python3 found: YES (/usr/bin/python3) 00:05:50.988 Program cat found: YES (/usr/bin/cat) 00:05:50.988 Compiler for C supports arguments -march=native: YES 00:05:50.988 Checking for size of "void *" : 8 00:05:50.988 Checking for size of "void *" : 8 (cached) 00:05:50.988 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:05:50.988 Library m found: YES 00:05:50.988 Library numa found: YES 00:05:50.988 Has header "numaif.h" : YES 00:05:50.988 Library fdt found: NO 00:05:50.988 Library execinfo found: NO 00:05:50.988 Has header "execinfo.h" : YES 00:05:50.988 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:50.988 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:50.988 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:50.988 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:50.988 Run-time dependency openssl found: YES 3.1.1 00:05:50.988 Run-time dependency libpcap found: YES 1.10.4 00:05:50.988 Has header "pcap.h" with dependency libpcap: YES 00:05:50.988 Compiler for C supports arguments -Wcast-qual: YES 00:05:50.988 Compiler for C supports arguments -Wdeprecated: YES 00:05:50.988 Compiler for C supports arguments -Wformat: YES 00:05:50.988 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:50.988 Compiler for C supports arguments -Wformat-security: NO 00:05:50.988 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:50.988 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:50.988 Compiler for C supports arguments -Wnested-externs: YES 00:05:50.988 Compiler for C supports arguments -Wold-style-definition: YES 00:05:50.988 Compiler for C supports arguments -Wpointer-arith: YES 00:05:50.988 Compiler for C supports arguments -Wsign-compare: YES 00:05:50.988 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:50.988 Compiler for C supports arguments -Wundef: YES 00:05:50.988 Compiler for C supports arguments -Wwrite-strings: YES 00:05:50.988 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:50.988 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:05:50.988 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:50.988 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:50.988 Program objdump found: YES (/usr/bin/objdump) 00:05:50.988 Compiler for C supports arguments -mavx512f: YES 00:05:50.988 Checking if "AVX512 checking" compiles: YES 00:05:50.988 Fetching value of define "__SSE4_2__" : 1 00:05:50.988 Fetching value of define "__AES__" : 1 00:05:50.988 Fetching value of define "__AVX__" : 1 00:05:50.988 Fetching value of define "__AVX2__" : 1 00:05:50.988 Fetching value of define "__AVX512BW__" : 1 00:05:50.988 Fetching value of define "__AVX512CD__" : 1 00:05:50.988 Fetching value of define "__AVX512DQ__" : 1 00:05:50.988 Fetching value of define "__AVX512F__" : 1 00:05:50.988 Fetching value of define "__AVX512VL__" : 1 00:05:50.988 Fetching value of define "__PCLMUL__" : 1 00:05:50.988 Fetching value of define "__RDRND__" : 1 00:05:50.988 Fetching value of define "__RDSEED__" : 1 00:05:50.988 Fetching value of define "__VPCLMULQDQ__" : 1 00:05:50.988 Fetching value of define "__znver1__" : (undefined) 00:05:50.988 Fetching value of define "__znver2__" : (undefined) 00:05:50.988 Fetching value of define "__znver3__" : (undefined) 00:05:50.988 Fetching value of define "__znver4__" : (undefined) 00:05:50.988 Library asan found: YES 00:05:50.988 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:50.988 Message: lib/log: Defining dependency "log" 00:05:50.988 Message: lib/kvargs: Defining dependency "kvargs" 00:05:50.988 Message: lib/telemetry: Defining dependency "telemetry" 00:05:50.988 Library rt found: YES 00:05:50.988 Checking for function "getentropy" : NO 00:05:50.988 Message: 
lib/eal: Defining dependency "eal" 00:05:50.988 Message: lib/ring: Defining dependency "ring" 00:05:50.988 Message: lib/rcu: Defining dependency "rcu" 00:05:50.988 Message: lib/mempool: Defining dependency "mempool" 00:05:50.988 Message: lib/mbuf: Defining dependency "mbuf" 00:05:50.988 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:50.988 Fetching value of define "__AVX512F__" : 1 (cached) 00:05:50.988 Fetching value of define "__AVX512BW__" : 1 (cached) 00:05:50.988 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:05:50.988 Fetching value of define "__AVX512VL__" : 1 (cached) 00:05:50.988 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:05:50.988 Compiler for C supports arguments -mpclmul: YES 00:05:50.988 Compiler for C supports arguments -maes: YES 00:05:50.988 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:50.988 Compiler for C supports arguments -mavx512bw: YES 00:05:50.988 Compiler for C supports arguments -mavx512dq: YES 00:05:50.988 Compiler for C supports arguments -mavx512vl: YES 00:05:50.988 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:50.988 Compiler for C supports arguments -mavx2: YES 00:05:50.988 Compiler for C supports arguments -mavx: YES 00:05:50.988 Message: lib/net: Defining dependency "net" 00:05:50.988 Message: lib/meter: Defining dependency "meter" 00:05:50.988 Message: lib/ethdev: Defining dependency "ethdev" 00:05:50.988 Message: lib/pci: Defining dependency "pci" 00:05:50.988 Message: lib/cmdline: Defining dependency "cmdline" 00:05:50.988 Message: lib/hash: Defining dependency "hash" 00:05:50.988 Message: lib/timer: Defining dependency "timer" 00:05:50.988 Message: lib/compressdev: Defining dependency "compressdev" 00:05:50.988 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:50.988 Message: lib/dmadev: Defining dependency "dmadev" 00:05:50.988 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:50.988 Message: lib/power: Defining dependency "power" 00:05:50.988 Message: lib/reorder: Defining dependency "reorder" 00:05:50.988 Message: lib/security: Defining dependency "security" 00:05:50.988 Has header "linux/userfaultfd.h" : YES 00:05:50.988 Has header "linux/vduse.h" : YES 00:05:50.988 Message: lib/vhost: Defining dependency "vhost" 00:05:50.988 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:50.988 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:50.988 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:50.988 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:50.988 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:50.988 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:50.988 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:50.988 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:50.988 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:50.988 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:50.988 Program doxygen found: YES (/usr/local/bin/doxygen) 00:05:50.988 Configuring doxy-api-html.conf using configuration 00:05:50.988 Configuring doxy-api-man.conf using configuration 00:05:50.988 Program mandb found: YES (/usr/bin/mandb) 00:05:50.988 Program sphinx-build found: NO 00:05:50.988 Configuring rte_build_config.h using configuration 00:05:50.988 Message: 00:05:50.988 ================= 00:05:50.988 Applications Enabled 00:05:50.988 
=================
00:05:50.988
00:05:50.988 apps:
00:05:50.988
00:05:50.988
00:05:50.988 Message:
00:05:50.988 =================
00:05:50.988 Libraries Enabled
00:05:50.988 =================
00:05:50.988
00:05:50.988 libs:
00:05:50.988 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:05:50.988 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:05:50.988 cryptodev, dmadev, power, reorder, security, vhost,
00:05:50.988
00:05:50.988 Message:
00:05:50.988 ===============
00:05:50.988 Drivers Enabled
00:05:50.988 ===============
00:05:50.988
00:05:50.988 common:
00:05:50.988
00:05:50.988 bus:
00:05:50.988 pci, vdev,
00:05:50.988 mempool:
00:05:50.988 ring,
00:05:50.988 dma:
00:05:50.988
00:05:50.988 net:
00:05:50.988
00:05:50.988 crypto:
00:05:50.988
00:05:50.988 compress:
00:05:50.988
00:05:50.988 vdpa:
00:05:50.988
00:05:50.988
00:05:50.988 Message:
00:05:50.988 =================
00:05:50.988 Content Skipped
00:05:50.988 =================
00:05:50.988
00:05:50.988 apps:
00:05:50.988 dumpcap: explicitly disabled via build config
00:05:50.989 graph: explicitly disabled via build config
00:05:50.989 pdump: explicitly disabled via build config
00:05:50.989 proc-info: explicitly disabled via build config
00:05:50.989 test-acl: explicitly disabled via build config
00:05:50.989 test-bbdev: explicitly disabled via build config
00:05:50.989 test-cmdline: explicitly disabled via build config
00:05:50.989 test-compress-perf: explicitly disabled via build config
00:05:50.989 test-crypto-perf: explicitly disabled via build config
00:05:50.989 test-dma-perf: explicitly disabled via build config
00:05:50.989 test-eventdev: explicitly disabled via build config
00:05:50.989 test-fib: explicitly disabled via build config
00:05:50.989 test-flow-perf: explicitly disabled via build config
00:05:50.989 test-gpudev: explicitly disabled via build config
00:05:50.989 test-mldev: explicitly disabled via build config
00:05:50.989 test-pipeline: explicitly disabled via build config
00:05:50.989 test-pmd: explicitly disabled via build config
00:05:50.989 test-regex: explicitly disabled via build config
00:05:50.989 test-sad: explicitly disabled via build config
00:05:50.989 test-security-perf: explicitly disabled via build config
00:05:50.989
00:05:50.989 libs:
00:05:50.989 argparse: explicitly disabled via build config
00:05:50.989 metrics: explicitly disabled via build config
00:05:50.989 acl: explicitly disabled via build config
00:05:50.989 bbdev: explicitly disabled via build config
00:05:50.989 bitratestats: explicitly disabled via build config
00:05:50.989 bpf: explicitly disabled via build config
00:05:50.989 cfgfile: explicitly disabled via build config
00:05:50.989 distributor: explicitly disabled via build config
00:05:50.989 efd: explicitly disabled via build config
00:05:50.989 eventdev: explicitly disabled via build config
00:05:50.989 dispatcher: explicitly disabled via build config
00:05:50.989 gpudev: explicitly disabled via build config
00:05:50.989 gro: explicitly disabled via build config
00:05:50.989 gso: explicitly disabled via build config
00:05:50.989 ip_frag: explicitly disabled via build config
00:05:50.989 jobstats: explicitly disabled via build config
00:05:50.989 latencystats: explicitly disabled via build config
00:05:50.989 lpm: explicitly disabled via build config
00:05:50.989 member: explicitly disabled via build config
00:05:50.989 pcapng: explicitly disabled via build config
00:05:50.989 rawdev: explicitly disabled via build config
00:05:50.989 regexdev: explicitly disabled via build config
00:05:50.989 mldev: explicitly disabled via build config
00:05:50.989 rib: explicitly disabled via build config
00:05:50.989 sched: explicitly disabled via build config
00:05:50.989 stack: explicitly disabled via build config
00:05:50.989 ipsec: explicitly disabled via build config
00:05:50.989 pdcp: explicitly disabled via build config
00:05:50.989 fib: explicitly disabled via build config
00:05:50.989 port: explicitly disabled via build config
00:05:50.989 pdump: explicitly disabled via build config
00:05:50.989 table: explicitly disabled via build config
00:05:50.989 pipeline: explicitly disabled via build config
00:05:50.989 graph: explicitly disabled via build config
00:05:50.989 node: explicitly disabled via build config
00:05:50.989
00:05:50.989 drivers:
00:05:50.989 common/cpt: not in enabled drivers build config
00:05:50.989 common/dpaax: not in enabled drivers build config
00:05:50.989 common/iavf: not in enabled drivers build config
00:05:50.989 common/idpf: not in enabled drivers build config
00:05:50.989 common/ionic: not in enabled drivers build config
00:05:50.989 common/mvep: not in enabled drivers build config
00:05:50.989 common/octeontx: not in enabled drivers build config
00:05:50.989 bus/auxiliary: not in enabled drivers build config
00:05:50.989 bus/cdx: not in enabled drivers build config
00:05:50.989 bus/dpaa: not in enabled drivers build config
00:05:50.989 bus/fslmc: not in enabled drivers build config
00:05:50.989 bus/ifpga: not in enabled drivers build config
00:05:50.989 bus/platform: not in enabled drivers build config
00:05:50.989 bus/uacce: not in enabled drivers build config
00:05:50.989 bus/vmbus: not in enabled drivers build config
00:05:50.989 common/cnxk: not in enabled drivers build config
00:05:50.989 common/mlx5: not in enabled drivers build config
00:05:50.989 common/nfp: not in enabled drivers build config
00:05:50.989 common/nitrox: not in enabled drivers build config
00:05:50.989 common/qat: not in enabled drivers build config
00:05:50.989 common/sfc_efx: not in enabled drivers build config
00:05:50.989 mempool/bucket: not in enabled drivers build config
00:05:50.989 mempool/cnxk: not in enabled drivers build config
00:05:50.989 mempool/dpaa: not in enabled drivers build config
00:05:50.989 mempool/dpaa2: not in enabled drivers build config
00:05:50.989 mempool/octeontx: not in enabled drivers build config
00:05:50.989 mempool/stack: not in enabled drivers build config
00:05:50.989 dma/cnxk: not in enabled drivers build config
00:05:50.989 dma/dpaa: not in enabled drivers build config
00:05:50.989 dma/dpaa2: not in enabled drivers build config
00:05:50.989 dma/hisilicon: not in enabled drivers build config
00:05:50.989 dma/idxd: not in enabled drivers build config
00:05:50.989 dma/ioat: not in enabled drivers build config
00:05:50.989 dma/skeleton: not in enabled drivers build config
00:05:50.989 net/af_packet: not in enabled drivers build config
00:05:50.989 net/af_xdp: not in enabled drivers build config
00:05:50.989 net/ark: not in enabled drivers build config
00:05:50.989 net/atlantic: not in enabled drivers build config
00:05:50.989 net/avp: not in enabled drivers build config
00:05:50.989 net/axgbe: not in enabled drivers build config
00:05:50.989 net/bnx2x: not in enabled drivers build config
00:05:50.989 net/bnxt: not in enabled drivers build config
00:05:50.989 net/bonding: not in enabled drivers build config
00:05:50.989 net/cnxk: not in enabled drivers build config
00:05:50.989 net/cpfl: not in enabled drivers build config
00:05:50.989 net/cxgbe: not in enabled drivers build config
00:05:50.989 net/dpaa: not in enabled drivers build config
00:05:50.989 net/dpaa2: not in enabled drivers build config
00:05:50.989 net/e1000: not in enabled drivers build config
00:05:50.989 net/ena: not in enabled drivers build config
00:05:50.989 net/enetc: not in enabled drivers build config
00:05:50.989 net/enetfec: not in enabled drivers build config
00:05:50.989 net/enic: not in enabled drivers build config
00:05:50.989 net/failsafe: not in enabled drivers build config
00:05:50.989 net/fm10k: not in enabled drivers build config
00:05:50.989 net/gve: not in enabled drivers build config
00:05:50.989 net/hinic: not in enabled drivers build config
00:05:50.989 net/hns3: not in enabled drivers build config
00:05:50.989 net/i40e: not in enabled drivers build config
00:05:50.989 net/iavf: not in enabled drivers build config
00:05:50.989 net/ice: not in enabled drivers build config
00:05:50.989 net/idpf: not in enabled drivers build config
00:05:50.989 net/igc: not in enabled drivers build config
00:05:50.989 net/ionic: not in enabled drivers build config
00:05:50.989 net/ipn3ke: not in enabled drivers build config
00:05:50.989 net/ixgbe: not in enabled drivers build config
00:05:50.989 net/mana: not in enabled drivers build config
00:05:50.989 net/memif: not in enabled drivers build config
00:05:50.989 net/mlx4: not in enabled drivers build config
00:05:50.989 net/mlx5: not in enabled drivers build config
00:05:50.989 net/mvneta: not in enabled drivers build config
00:05:50.989 net/mvpp2: not in enabled drivers build config
00:05:50.989 net/netvsc: not in enabled drivers build config
00:05:50.989 net/nfb: not in enabled drivers build config
00:05:50.989 net/nfp: not in enabled drivers build config
00:05:50.989 net/ngbe: not in enabled drivers build config
00:05:50.989 net/null: not in enabled drivers build config
00:05:50.989 net/octeontx: not in enabled drivers build config
00:05:50.989 net/octeon_ep: not in enabled drivers build config
00:05:50.989 net/pcap: not in enabled drivers build config
00:05:50.989 net/pfe: not in enabled drivers build config
00:05:50.989 net/qede: not in enabled drivers build config
00:05:50.989 net/ring: not in enabled drivers build config
00:05:50.989 net/sfc: not in enabled drivers build config
00:05:50.989 net/softnic: not in enabled drivers build config
00:05:50.989 net/tap: not in enabled drivers build config
00:05:50.989 net/thunderx: not in enabled drivers build config
00:05:50.989 net/txgbe: not in enabled drivers build config
00:05:50.989 net/vdev_netvsc: not in enabled drivers build config
00:05:50.989 net/vhost: not in enabled drivers build config
00:05:50.989 net/virtio: not in enabled drivers build config
00:05:50.989 net/vmxnet3: not in enabled drivers build config
00:05:50.989 raw/*: missing internal dependency, "rawdev"
00:05:50.989 crypto/armv8: not in enabled drivers build config
00:05:50.989 crypto/bcmfs: not in enabled drivers build config
00:05:50.989 crypto/caam_jr: not in enabled drivers build config
00:05:50.989 crypto/ccp: not in enabled drivers build config
00:05:50.989 crypto/cnxk: not in enabled drivers build config
00:05:50.989 crypto/dpaa_sec: not in enabled drivers build config
00:05:50.989 crypto/dpaa2_sec: not in enabled drivers build config
00:05:50.989 crypto/ipsec_mb: not in enabled drivers build config
00:05:50.989 crypto/mlx5: not in enabled drivers build config
00:05:50.989 crypto/mvsam: not in enabled drivers build config
00:05:50.989 crypto/nitrox: not in enabled drivers build config
00:05:50.989 crypto/null: not in enabled drivers build config
00:05:50.989 crypto/octeontx: not in enabled drivers build config
00:05:50.989 crypto/openssl: not in enabled drivers build config
00:05:50.989 crypto/scheduler: not in enabled drivers build config
00:05:50.989 crypto/uadk: not in enabled drivers build config
00:05:50.989 crypto/virtio: not in enabled drivers build config
00:05:50.989 compress/isal: not in enabled drivers build config
00:05:50.989 compress/mlx5: not in enabled drivers build config
00:05:50.989 compress/nitrox: not in enabled drivers build config
00:05:50.989 compress/octeontx: not in enabled drivers build config
00:05:50.989 compress/zlib: not in enabled drivers build config
00:05:50.989 regex/*: missing internal dependency, "regexdev"
00:05:50.989 ml/*: missing internal dependency, "mldev"
00:05:50.989 vdpa/ifc: not in enabled drivers build config
00:05:50.989 vdpa/mlx5: not in enabled drivers build config
00:05:50.989 vdpa/nfp: not in enabled drivers build config
00:05:50.989 vdpa/sfc: not in enabled drivers build config
00:05:50.989 event/*: missing internal dependency, "eventdev"
00:05:50.989 baseband/*: missing internal dependency, "bbdev"
00:05:50.989 gpu/*: missing internal dependency, "gpudev"
00:05:50.989
00:05:50.989
00:05:51.557 Build targets in project: 84
00:05:51.557
00:05:51.557 DPDK 24.03.0
00:05:51.557
00:05:51.557 User defined options
00:05:51.557 buildtype : debug
00:05:51.557 default_library : shared
00:05:51.557 libdir : lib
00:05:51.557 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:05:51.557 b_sanitize : address
00:05:51.557 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:05:51.557 c_link_args :
00:05:51.557 cpu_instruction_set: native
00:05:51.557 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:05:51.557 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:05:51.557 enable_docs : false
00:05:51.557 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:05:51.557 enable_kmods : false
00:05:51.557 max_lcores : 128
00:05:51.557 tests : false
00:05:51.557
00:05:51.557 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:05:52.126 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:05:52.126 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:05:52.126 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:05:52.126 [3/267] Linking static target lib/librte_kvargs.a
00:05:52.126 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:05:52.126 [5/267] Linking static target lib/librte_log.a
00:05:52.126 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:05:52.385 [7/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:05:52.385 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:05:52.385 [9/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
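The "User defined options" summary above is effectively a recipe: it records every option that was passed to DPDK's meson configure step for this run. As a rough sketch only (SPDK normally drives this step through its own configure wrapper, which this log does not show), the equivalent standalone invocation from the dpdk/ submodule would be:

    # Sketch reconstructed from the options logged above; the full
    # disable_apps/disable_libs values are elided with "..." here since
    # they are printed verbatim a few lines up.
    meson setup build-tmp \
        --buildtype=debug --default-library=shared --libdir=lib \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        -Db_sanitize=address \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Ddisable_apps='dumpcap,graph,pdump,...' \
        -Ddisable_libs='acl,argparse,bbdev,...' \
        -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm' \
        -Denable_docs=false -Denable_kmods=false \
        -Dmax_lcores=128 -Dtests=false
    ninja -C build-tmp -j 10

The -j 10 matches the job count meson reports further down when it hands the build off to ninja.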
00:05:52.385 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:52.646 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:52.646 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:52.646 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:52.646 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:52.646 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:52.646 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:52.646 [17/267] Linking static target lib/librte_telemetry.a 00:05:52.906 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:05:52.906 [19/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:52.906 [20/267] Linking target lib/librte_log.so.24.1 00:05:52.906 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:53.168 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:53.168 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:53.168 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:53.168 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:53.168 [26/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:53.168 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:53.168 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:53.168 [29/267] Linking target lib/librte_kvargs.so.24.1 00:05:53.168 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:53.168 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:53.428 [32/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:53.428 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:53.428 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:53.428 [35/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:53.428 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:53.428 [37/267] Linking target lib/librte_telemetry.so.24.1 00:05:53.688 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:53.688 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:53.688 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:53.688 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:53.688 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:53.688 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:53.688 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:53.688 [45/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:53.688 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:53.949 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:53.949 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 
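The "Generating lib/<name>.sym_chk" and "Generating symbol file" steps interleaved with the compiles above are DPDK's export check: after each shared library links, the build verifies that the symbols the .so actually exports match what the library's version.map promises. Roughly, and only as a hand-rolled illustration (the real check is DPDK's buildtools/check-symbols.sh, wrapped by meson as the log notes), the librte_log case amounts to:

    # Approximation of the lib/log.sym_chk step; paths assume the tree
    # layout shown in this log, and symbol-version suffixes (@@DPDK_24)
    # are stripped so plain names can be compared.
    cd /home/vagrant/spdk_repo/spdk/dpdk
    nm -D --defined-only build-tmp/lib/librte_log.so.24.1 \
        | awk '{print $3}' | sed 's/@.*//' | sort -u > /tmp/exported
    sed -n 's/^[[:space:]]*\([A-Za-z_][A-Za-z0-9_]*\);.*/\1/p' lib/log/version.map \
        | sort -u > /tmp/declared
    diff /tmp/exported /tmp/declared   # any difference would fail the check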
00:05:53.949 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:54.209 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:54.209 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:54.209 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:54.209 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:54.209 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:54.209 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:54.209 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:54.469 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:54.469 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:54.469 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:54.469 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:54.469 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:54.469 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:54.469 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:54.731 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:54.731 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:54.731 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:54.731 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:54.731 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:54.992 [69/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:54.992 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:54.992 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:54.992 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:54.992 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:54.992 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:54.992 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:55.254 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:55.254 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:55.254 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:55.254 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:55.254 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:55.254 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:55.515 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:55.515 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:55.515 [84/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:55.515 [85/267] Linking static target lib/librte_ring.a 00:05:55.515 [86/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:55.515 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:55.515 [88/267] Linking static target lib/librte_eal.a 00:05:55.777 [89/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:55.777 [90/267] 
Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:55.777 [91/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:56.051 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:56.051 [93/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:56.051 [94/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:56.051 [95/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:56.051 [96/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:56.051 [97/267] Linking static target lib/librte_mempool.a 00:05:56.051 [98/267] Linking static target lib/librte_rcu.a 00:05:56.051 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:56.320 [100/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:56.320 [101/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:05:56.320 [102/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:56.320 [103/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:56.320 [104/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:56.582 [105/267] Linking static target lib/librte_meter.a 00:05:56.582 [106/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:56.582 [107/267] Linking static target lib/librte_mbuf.a 00:05:56.582 [108/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:56.582 [109/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:56.582 [110/267] Linking static target lib/librte_net.a 00:05:56.582 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:56.582 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:56.844 [113/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:56.844 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:56.844 [115/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:56.844 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:57.105 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:57.105 [118/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:57.105 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:57.368 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:57.368 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:57.368 [122/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:57.628 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:57.628 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:57.628 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:57.628 [126/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:57.628 [127/267] Linking static target lib/librte_pci.a 00:05:57.628 [128/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:57.628 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:57.628 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:57.628 [131/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:57.889 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:57.889 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:57.889 [134/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:57.889 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:57.889 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:57.889 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:57.889 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:57.889 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:57.889 [140/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:57.889 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:57.889 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:57.889 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:57.889 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:57.889 [145/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:58.157 [146/267] Linking static target lib/librte_cmdline.a 00:05:58.157 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:58.468 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:58.468 [149/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:58.468 [150/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:58.468 [151/267] Linking static target lib/librte_ethdev.a 00:05:58.468 [152/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:58.468 [153/267] Linking static target lib/librte_timer.a 00:05:58.468 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:58.729 [155/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:58.729 [156/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:58.729 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:58.729 [158/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:58.729 [159/267] Linking static target lib/librte_compressdev.a 00:05:58.991 [160/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:58.991 [161/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:58.991 [162/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:58.991 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:58.991 [164/267] Linking static target lib/librte_hash.a 00:05:58.991 [165/267] Linking static target lib/librte_dmadev.a 00:05:58.991 [166/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:58.991 [167/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:59.251 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:59.251 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:59.512 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:59.512 [171/267] Generating 
lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:59.512 [172/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:59.512 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:59.512 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:59.773 [175/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:59.773 [176/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:59.773 [177/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:59.773 [178/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:59.773 [179/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:59.773 [180/267] Linking static target lib/librte_cryptodev.a 00:05:59.773 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:06:00.035 [182/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:06:00.035 [183/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:06:00.035 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:06:00.035 [185/267] Linking static target lib/librte_power.a 00:06:00.296 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:06:00.296 [187/267] Linking static target lib/librte_reorder.a 00:06:00.296 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:06:00.296 [189/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:06:00.296 [190/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:06:00.296 [191/267] Linking static target lib/librte_security.a 00:06:00.296 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:06:00.559 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:06:00.820 [194/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:06:00.820 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:06:01.082 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:06:01.082 [197/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:06:01.082 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:06:01.082 [199/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:06:01.082 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:06:01.082 [201/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:06:01.345 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:06:01.345 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:06:01.345 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:06:01.345 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:06:01.607 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:06:01.607 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:06:01.607 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:06:01.607 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:06:01.607 [210/267] Generating 
lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:01.607 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:06:01.607 [212/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:06:01.607 [213/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:01.607 [214/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:06:01.607 [215/267] Linking static target drivers/librte_bus_vdev.a 00:06:01.868 [216/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:01.868 [217/267] Linking static target drivers/librte_bus_pci.a 00:06:01.868 [218/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:06:01.868 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:06:01.868 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:06:01.868 [221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:02.129 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:06:02.129 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:02.129 [224/267] Linking static target drivers/librte_mempool_ring.a 00:06:02.129 [225/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:06:02.129 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:06:02.701 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:06:03.642 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:06:03.642 [229/267] Linking target lib/librte_eal.so.24.1 00:06:03.642 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:06:03.903 [231/267] Linking target lib/librte_ring.so.24.1 00:06:03.903 [232/267] Linking target lib/librte_dmadev.so.24.1 00:06:03.903 [233/267] Linking target lib/librte_timer.so.24.1 00:06:03.903 [234/267] Linking target lib/librte_meter.so.24.1 00:06:03.903 [235/267] Linking target drivers/librte_bus_vdev.so.24.1 00:06:03.903 [236/267] Linking target lib/librte_pci.so.24.1 00:06:03.903 [237/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:06:03.903 [238/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:06:03.903 [239/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:06:03.903 [240/267] Linking target lib/librte_rcu.so.24.1 00:06:03.903 [241/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:06:03.903 [242/267] Linking target lib/librte_mempool.so.24.1 00:06:03.903 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:06:03.903 [244/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:06:04.165 [245/267] Linking target drivers/librte_bus_pci.so.24.1 00:06:04.165 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:06:04.165 [247/267] Linking target lib/librte_mbuf.so.24.1 00:06:04.165 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:06:04.165 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 
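Earlier in this stretch, the "Generating drivers/rte_bus_pci.pmd.c with a custom command" steps emit a small generated C stub per driver that embeds its PMD metadata into the final object before linking. Once the drivers/*.so files exist, that metadata can be read back with the helper script DPDK ships; an illustrative invocation against the bus driver built here (the tool's exact output format varies between DPDK releases):

    # Dump the PMD info embedded by the *.pmd.c generation steps above.
    cd /home/vagrant/spdk_repo/spdk/dpdk
    python3 usertools/dpdk-pmdinfo.py build-tmp/drivers/librte_bus_pci.so.24.1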
00:06:04.165 [250/267] Linking target lib/librte_net.so.24.1 00:06:04.165 [251/267] Linking target lib/librte_reorder.so.24.1 00:06:04.165 [252/267] Linking target lib/librte_cryptodev.so.24.1 00:06:04.165 [253/267] Linking target lib/librte_compressdev.so.24.1 00:06:04.425 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:06:04.425 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:06:04.425 [256/267] Linking target lib/librte_hash.so.24.1 00:06:04.425 [257/267] Linking target lib/librte_cmdline.so.24.1 00:06:04.425 [258/267] Linking target lib/librte_security.so.24.1 00:06:04.425 [259/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:06:04.425 [260/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:06:04.425 [261/267] Linking target lib/librte_ethdev.so.24.1 00:06:04.686 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:06:04.686 [263/267] Linking target lib/librte_power.so.24.1 00:06:06.075 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:06:06.075 [265/267] Linking static target lib/librte_vhost.a 00:06:07.477 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:06:07.477 [267/267] Linking target lib/librte_vhost.so.24.1 00:06:07.477 INFO: autodetecting backend as ninja 00:06:07.477 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:06:25.643 CC lib/log/log_flags.o 00:06:25.643 CC lib/log/log.o 00:06:25.643 CC lib/log/log_deprecated.o 00:06:25.643 CC lib/ut/ut.o 00:06:25.643 CC lib/ut_mock/mock.o 00:06:25.643 LIB libspdk_ut.a 00:06:25.643 LIB libspdk_ut_mock.a 00:06:25.643 LIB libspdk_log.a 00:06:25.643 SO libspdk_ut.so.2.0 00:06:25.643 SO libspdk_ut_mock.so.6.0 00:06:25.643 SO libspdk_log.so.7.1 00:06:25.643 SYMLINK libspdk_ut.so 00:06:25.643 SYMLINK libspdk_ut_mock.so 00:06:25.643 SYMLINK libspdk_log.so 00:06:25.643 CC lib/util/base64.o 00:06:25.643 CC lib/util/bit_array.o 00:06:25.643 CC lib/util/cpuset.o 00:06:25.643 CC lib/ioat/ioat.o 00:06:25.643 CC lib/util/crc16.o 00:06:25.643 CC lib/util/crc32.o 00:06:25.643 CC lib/util/crc32c.o 00:06:25.643 CC lib/dma/dma.o 00:06:25.643 CXX lib/trace_parser/trace.o 00:06:25.643 CC lib/util/crc32_ieee.o 00:06:25.643 CC lib/vfio_user/host/vfio_user_pci.o 00:06:25.643 CC lib/vfio_user/host/vfio_user.o 00:06:25.643 CC lib/util/crc64.o 00:06:25.643 CC lib/util/dif.o 00:06:25.643 CC lib/util/fd.o 00:06:25.643 LIB libspdk_dma.a 00:06:25.643 CC lib/util/fd_group.o 00:06:25.643 CC lib/util/file.o 00:06:25.643 SO libspdk_dma.so.5.0 00:06:25.643 CC lib/util/hexlify.o 00:06:25.643 SYMLINK libspdk_dma.so 00:06:25.643 CC lib/util/iov.o 00:06:25.643 CC lib/util/math.o 00:06:25.643 CC lib/util/net.o 00:06:25.643 LIB libspdk_ioat.a 00:06:25.643 SO libspdk_ioat.so.7.0 00:06:25.643 CC lib/util/pipe.o 00:06:25.643 LIB libspdk_vfio_user.a 00:06:25.643 CC lib/util/strerror_tls.o 00:06:25.643 SYMLINK libspdk_ioat.so 00:06:25.643 SO libspdk_vfio_user.so.5.0 00:06:25.643 CC lib/util/string.o 00:06:25.643 CC lib/util/uuid.o 00:06:25.643 CC lib/util/xor.o 00:06:25.643 CC lib/util/zipf.o 00:06:25.643 SYMLINK libspdk_vfio_user.so 00:06:25.643 CC lib/util/md5.o 00:06:25.643 LIB libspdk_util.a 00:06:25.643 SO libspdk_util.so.10.1 00:06:25.643 LIB libspdk_trace_parser.a 00:06:25.643 SO libspdk_trace_parser.so.6.0 00:06:25.643 
SYMLINK libspdk_util.so 00:06:25.643 SYMLINK libspdk_trace_parser.so 00:06:25.643 CC lib/json/json_parse.o 00:06:25.644 CC lib/json/json_util.o 00:06:25.644 CC lib/json/json_write.o 00:06:25.644 CC lib/idxd/idxd.o 00:06:25.644 CC lib/vmd/vmd.o 00:06:25.644 CC lib/vmd/led.o 00:06:25.644 CC lib/idxd/idxd_user.o 00:06:25.644 CC lib/conf/conf.o 00:06:25.644 CC lib/rdma_utils/rdma_utils.o 00:06:25.644 CC lib/env_dpdk/env.o 00:06:25.644 CC lib/env_dpdk/memory.o 00:06:25.644 CC lib/env_dpdk/pci.o 00:06:25.644 CC lib/env_dpdk/init.o 00:06:25.644 LIB libspdk_conf.a 00:06:25.644 SO libspdk_conf.so.6.0 00:06:25.644 CC lib/idxd/idxd_kernel.o 00:06:25.644 SYMLINK libspdk_conf.so 00:06:25.644 CC lib/env_dpdk/threads.o 00:06:25.644 LIB libspdk_json.a 00:06:25.644 LIB libspdk_rdma_utils.a 00:06:25.644 SO libspdk_json.so.6.0 00:06:25.904 SO libspdk_rdma_utils.so.1.0 00:06:25.904 SYMLINK libspdk_json.so 00:06:25.904 CC lib/env_dpdk/pci_ioat.o 00:06:25.904 CC lib/env_dpdk/pci_virtio.o 00:06:25.904 SYMLINK libspdk_rdma_utils.so 00:06:25.904 CC lib/env_dpdk/pci_vmd.o 00:06:25.904 CC lib/env_dpdk/pci_idxd.o 00:06:25.904 CC lib/env_dpdk/pci_event.o 00:06:25.904 CC lib/rdma_provider/common.o 00:06:25.904 CC lib/jsonrpc/jsonrpc_server.o 00:06:25.904 CC lib/env_dpdk/sigbus_handler.o 00:06:26.163 CC lib/env_dpdk/pci_dpdk.o 00:06:26.163 CC lib/rdma_provider/rdma_provider_verbs.o 00:06:26.163 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:26.163 LIB libspdk_idxd.a 00:06:26.163 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:26.163 SO libspdk_idxd.so.12.1 00:06:26.163 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:26.163 LIB libspdk_vmd.a 00:06:26.163 SYMLINK libspdk_idxd.so 00:06:26.163 CC lib/jsonrpc/jsonrpc_client.o 00:06:26.163 SO libspdk_vmd.so.6.0 00:06:26.163 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:26.163 SYMLINK libspdk_vmd.so 00:06:26.163 LIB libspdk_rdma_provider.a 00:06:26.163 SO libspdk_rdma_provider.so.7.0 00:06:26.421 SYMLINK libspdk_rdma_provider.so 00:06:26.421 LIB libspdk_jsonrpc.a 00:06:26.421 SO libspdk_jsonrpc.so.6.0 00:06:26.421 SYMLINK libspdk_jsonrpc.so 00:06:26.679 LIB libspdk_env_dpdk.a 00:06:26.679 SO libspdk_env_dpdk.so.15.1 00:06:26.679 CC lib/rpc/rpc.o 00:06:26.938 SYMLINK libspdk_env_dpdk.so 00:06:26.938 LIB libspdk_rpc.a 00:06:26.938 SO libspdk_rpc.so.6.0 00:06:26.938 SYMLINK libspdk_rpc.so 00:06:27.197 CC lib/notify/notify_rpc.o 00:06:27.197 CC lib/notify/notify.o 00:06:27.197 CC lib/trace/trace.o 00:06:27.197 CC lib/trace/trace_flags.o 00:06:27.197 CC lib/keyring/keyring_rpc.o 00:06:27.197 CC lib/trace/trace_rpc.o 00:06:27.197 CC lib/keyring/keyring.o 00:06:27.458 LIB libspdk_notify.a 00:06:27.458 SO libspdk_notify.so.6.0 00:06:27.458 LIB libspdk_trace.a 00:06:27.458 SYMLINK libspdk_notify.so 00:06:27.458 LIB libspdk_keyring.a 00:06:27.458 SO libspdk_trace.so.11.0 00:06:27.458 SO libspdk_keyring.so.2.0 00:06:27.458 SYMLINK libspdk_trace.so 00:06:27.458 SYMLINK libspdk_keyring.so 00:06:27.722 CC lib/thread/thread.o 00:06:27.722 CC lib/thread/iobuf.o 00:06:27.722 CC lib/sock/sock_rpc.o 00:06:27.722 CC lib/sock/sock.o 00:06:28.021 LIB libspdk_sock.a 00:06:28.021 SO libspdk_sock.so.10.0 00:06:28.021 SYMLINK libspdk_sock.so 00:06:28.283 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:28.283 CC lib/nvme/nvme_ctrlr.o 00:06:28.283 CC lib/nvme/nvme_ns_cmd.o 00:06:28.283 CC lib/nvme/nvme_ns.o 00:06:28.283 CC lib/nvme/nvme_pcie_common.o 00:06:28.283 CC lib/nvme/nvme_fabric.o 00:06:28.283 CC lib/nvme/nvme_pcie.o 00:06:28.283 CC lib/nvme/nvme_qpair.o 00:06:28.283 CC lib/nvme/nvme.o 00:06:28.857 CC lib/nvme/nvme_quirks.o 00:06:28.857 
CC lib/nvme/nvme_transport.o 00:06:29.120 CC lib/nvme/nvme_discovery.o 00:06:29.120 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:29.120 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:29.120 CC lib/nvme/nvme_tcp.o 00:06:29.120 CC lib/nvme/nvme_opal.o 00:06:29.120 CC lib/nvme/nvme_io_msg.o 00:06:29.120 CC lib/nvme/nvme_poll_group.o 00:06:29.120 LIB libspdk_thread.a 00:06:29.382 SO libspdk_thread.so.11.0 00:06:29.382 SYMLINK libspdk_thread.so 00:06:29.382 CC lib/nvme/nvme_zns.o 00:06:29.382 CC lib/nvme/nvme_stubs.o 00:06:29.382 CC lib/nvme/nvme_auth.o 00:06:29.644 CC lib/nvme/nvme_cuse.o 00:06:29.644 CC lib/nvme/nvme_rdma.o 00:06:29.644 CC lib/accel/accel.o 00:06:29.644 CC lib/blob/blobstore.o 00:06:29.906 CC lib/blob/request.o 00:06:29.906 CC lib/accel/accel_rpc.o 00:06:30.166 CC lib/accel/accel_sw.o 00:06:30.167 CC lib/init/json_config.o 00:06:30.167 CC lib/init/subsystem.o 00:06:30.167 CC lib/virtio/virtio.o 00:06:30.167 CC lib/fsdev/fsdev.o 00:06:30.167 CC lib/init/subsystem_rpc.o 00:06:30.167 CC lib/fsdev/fsdev_io.o 00:06:30.429 CC lib/init/rpc.o 00:06:30.429 CC lib/fsdev/fsdev_rpc.o 00:06:30.429 CC lib/virtio/virtio_vhost_user.o 00:06:30.429 CC lib/virtio/virtio_vfio_user.o 00:06:30.429 CC lib/virtio/virtio_pci.o 00:06:30.429 LIB libspdk_init.a 00:06:30.691 SO libspdk_init.so.6.0 00:06:30.691 CC lib/blob/zeroes.o 00:06:30.691 CC lib/blob/blob_bs_dev.o 00:06:30.691 SYMLINK libspdk_init.so 00:06:30.691 LIB libspdk_virtio.a 00:06:30.691 CC lib/event/app.o 00:06:30.691 CC lib/event/reactor.o 00:06:30.691 CC lib/event/log_rpc.o 00:06:30.691 CC lib/event/scheduler_static.o 00:06:30.691 CC lib/event/app_rpc.o 00:06:30.691 SO libspdk_virtio.so.7.0 00:06:30.950 LIB libspdk_fsdev.a 00:06:30.950 SYMLINK libspdk_virtio.so 00:06:30.950 SO libspdk_fsdev.so.2.0 00:06:30.950 SYMLINK libspdk_fsdev.so 00:06:30.950 LIB libspdk_nvme.a 00:06:31.211 SO libspdk_nvme.so.15.0 00:06:31.211 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:06:31.211 LIB libspdk_event.a 00:06:31.211 LIB libspdk_accel.a 00:06:31.470 SO libspdk_event.so.14.0 00:06:31.470 SYMLINK libspdk_nvme.so 00:06:31.470 SO libspdk_accel.so.16.0 00:06:31.470 SYMLINK libspdk_event.so 00:06:31.470 SYMLINK libspdk_accel.so 00:06:31.730 CC lib/bdev/bdev_zone.o 00:06:31.730 CC lib/bdev/bdev_rpc.o 00:06:31.730 CC lib/bdev/bdev.o 00:06:31.730 CC lib/bdev/part.o 00:06:31.730 CC lib/bdev/scsi_nvme.o 00:06:31.992 LIB libspdk_fuse_dispatcher.a 00:06:31.992 SO libspdk_fuse_dispatcher.so.1.0 00:06:31.992 SYMLINK libspdk_fuse_dispatcher.so 00:06:32.935 LIB libspdk_blob.a 00:06:32.935 SO libspdk_blob.so.12.0 00:06:33.196 SYMLINK libspdk_blob.so 00:06:33.196 CC lib/lvol/lvol.o 00:06:33.196 CC lib/blobfs/blobfs.o 00:06:33.196 CC lib/blobfs/tree.o 00:06:34.581 LIB libspdk_lvol.a 00:06:34.581 LIB libspdk_blobfs.a 00:06:34.581 SO libspdk_lvol.so.11.0 00:06:34.581 SO libspdk_blobfs.so.11.0 00:06:34.581 SYMLINK libspdk_blobfs.so 00:06:34.581 SYMLINK libspdk_lvol.so 00:06:34.581 LIB libspdk_bdev.a 00:06:34.842 SO libspdk_bdev.so.17.0 00:06:34.842 SYMLINK libspdk_bdev.so 00:06:35.104 CC lib/ftl/ftl_init.o 00:06:35.104 CC lib/ftl/ftl_core.o 00:06:35.104 CC lib/ftl/ftl_layout.o 00:06:35.104 CC lib/ftl/ftl_debug.o 00:06:35.104 CC lib/ftl/ftl_sb.o 00:06:35.104 CC lib/nbd/nbd.o 00:06:35.104 CC lib/ftl/ftl_io.o 00:06:35.104 CC lib/nvmf/ctrlr.o 00:06:35.104 CC lib/ublk/ublk.o 00:06:35.104 CC lib/scsi/dev.o 00:06:35.104 CC lib/scsi/lun.o 00:06:35.104 CC lib/ftl/ftl_l2p.o 00:06:35.104 CC lib/nbd/nbd_rpc.o 00:06:35.365 CC lib/ublk/ublk_rpc.o 00:06:35.365 CC lib/scsi/port.o 00:06:35.365 CC 
lib/ftl/ftl_l2p_flat.o 00:06:35.365 CC lib/ftl/ftl_nv_cache.o 00:06:35.365 CC lib/scsi/scsi.o 00:06:35.365 CC lib/scsi/scsi_bdev.o 00:06:35.365 CC lib/ftl/ftl_band.o 00:06:35.365 LIB libspdk_nbd.a 00:06:35.365 CC lib/ftl/ftl_band_ops.o 00:06:35.365 CC lib/ftl/ftl_writer.o 00:06:35.365 SO libspdk_nbd.so.7.0 00:06:35.365 SYMLINK libspdk_nbd.so 00:06:35.365 CC lib/ftl/ftl_rq.o 00:06:35.626 CC lib/nvmf/ctrlr_discovery.o 00:06:35.626 CC lib/scsi/scsi_pr.o 00:06:35.626 CC lib/ftl/ftl_reloc.o 00:06:35.626 CC lib/ftl/ftl_l2p_cache.o 00:06:35.626 LIB libspdk_ublk.a 00:06:35.626 SO libspdk_ublk.so.3.0 00:06:35.626 CC lib/ftl/ftl_p2l.o 00:06:35.626 SYMLINK libspdk_ublk.so 00:06:35.626 CC lib/nvmf/ctrlr_bdev.o 00:06:35.886 CC lib/nvmf/subsystem.o 00:06:35.886 CC lib/ftl/ftl_p2l_log.o 00:06:35.886 CC lib/scsi/scsi_rpc.o 00:06:35.886 CC lib/scsi/task.o 00:06:35.886 CC lib/ftl/mngt/ftl_mngt.o 00:06:35.886 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:36.146 LIB libspdk_scsi.a 00:06:36.146 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:36.146 SO libspdk_scsi.so.9.0 00:06:36.146 CC lib/nvmf/nvmf.o 00:06:36.146 SYMLINK libspdk_scsi.so 00:06:36.146 CC lib/nvmf/nvmf_rpc.o 00:06:36.146 CC lib/nvmf/transport.o 00:06:36.146 CC lib/nvmf/tcp.o 00:06:36.406 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:36.406 CC lib/nvmf/stubs.o 00:06:36.406 CC lib/nvmf/mdns_server.o 00:06:36.406 CC lib/iscsi/conn.o 00:06:36.406 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:36.667 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:36.667 CC lib/nvmf/rdma.o 00:06:36.667 CC lib/iscsi/init_grp.o 00:06:36.928 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:36.928 CC lib/vhost/vhost.o 00:06:36.928 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:36.928 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:36.928 CC lib/iscsi/iscsi.o 00:06:36.928 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:36.928 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:36.928 CC lib/iscsi/param.o 00:06:37.188 CC lib/iscsi/portal_grp.o 00:06:37.188 CC lib/vhost/vhost_rpc.o 00:06:37.188 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:37.188 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:06:37.188 CC lib/iscsi/tgt_node.o 00:06:37.498 CC lib/iscsi/iscsi_subsystem.o 00:06:37.498 CC lib/vhost/vhost_scsi.o 00:06:37.498 CC lib/nvmf/auth.o 00:06:37.498 CC lib/ftl/utils/ftl_conf.o 00:06:37.498 CC lib/vhost/vhost_blk.o 00:06:37.761 CC lib/ftl/utils/ftl_md.o 00:06:37.761 CC lib/vhost/rte_vhost_user.o 00:06:37.761 CC lib/ftl/utils/ftl_mempool.o 00:06:37.761 CC lib/iscsi/iscsi_rpc.o 00:06:38.021 CC lib/ftl/utils/ftl_bitmap.o 00:06:38.021 CC lib/iscsi/task.o 00:06:38.021 CC lib/ftl/utils/ftl_property.o 00:06:38.021 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:38.021 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:38.282 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:38.282 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:38.282 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:38.282 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:38.282 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:06:38.282 LIB libspdk_iscsi.a 00:06:38.282 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:38.282 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:38.282 SO libspdk_iscsi.so.8.0 00:06:38.282 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:38.282 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:38.542 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:06:38.542 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:06:38.542 CC lib/ftl/base/ftl_base_dev.o 00:06:38.542 CC lib/ftl/base/ftl_base_bdev.o 00:06:38.542 CC lib/ftl/ftl_trace.o 00:06:38.542 SYMLINK libspdk_iscsi.so 00:06:38.542 LIB libspdk_nvmf.a 00:06:38.805 SO libspdk_nvmf.so.20.0 00:06:38.805 LIB libspdk_vhost.a 00:06:38.805 LIB 
libspdk_ftl.a 00:06:38.805 SO libspdk_vhost.so.8.0 00:06:38.805 SYMLINK libspdk_vhost.so 00:06:38.805 SYMLINK libspdk_nvmf.so 00:06:38.805 SO libspdk_ftl.so.9.0 00:06:39.067 SYMLINK libspdk_ftl.so 00:06:39.328 CC module/env_dpdk/env_dpdk_rpc.o 00:06:39.328 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:39.328 CC module/sock/posix/posix.o 00:06:39.328 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:39.328 CC module/accel/ioat/accel_ioat.o 00:06:39.328 CC module/accel/error/accel_error.o 00:06:39.328 CC module/blob/bdev/blob_bdev.o 00:06:39.590 CC module/fsdev/aio/fsdev_aio.o 00:06:39.590 CC module/keyring/file/keyring.o 00:06:39.590 CC module/scheduler/gscheduler/gscheduler.o 00:06:39.590 LIB libspdk_env_dpdk_rpc.a 00:06:39.590 SO libspdk_env_dpdk_rpc.so.6.0 00:06:39.590 LIB libspdk_scheduler_dpdk_governor.a 00:06:39.590 SYMLINK libspdk_env_dpdk_rpc.so 00:06:39.590 CC module/keyring/file/keyring_rpc.o 00:06:39.590 CC module/accel/ioat/accel_ioat_rpc.o 00:06:39.590 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:39.590 LIB libspdk_scheduler_gscheduler.a 00:06:39.590 SO libspdk_scheduler_gscheduler.so.4.0 00:06:39.590 CC module/fsdev/aio/fsdev_aio_rpc.o 00:06:39.590 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:39.590 CC module/accel/error/accel_error_rpc.o 00:06:39.590 LIB libspdk_scheduler_dynamic.a 00:06:39.590 SYMLINK libspdk_scheduler_gscheduler.so 00:06:39.590 SO libspdk_scheduler_dynamic.so.4.0 00:06:39.590 LIB libspdk_blob_bdev.a 00:06:39.590 LIB libspdk_accel_ioat.a 00:06:39.590 LIB libspdk_keyring_file.a 00:06:39.590 SO libspdk_blob_bdev.so.12.0 00:06:39.590 SO libspdk_accel_ioat.so.6.0 00:06:39.590 SO libspdk_keyring_file.so.2.0 00:06:39.590 SYMLINK libspdk_scheduler_dynamic.so 00:06:39.852 CC module/fsdev/aio/linux_aio_mgr.o 00:06:39.853 LIB libspdk_accel_error.a 00:06:39.853 SYMLINK libspdk_accel_ioat.so 00:06:39.853 SYMLINK libspdk_blob_bdev.so 00:06:39.853 SYMLINK libspdk_keyring_file.so 00:06:39.853 CC module/keyring/linux/keyring.o 00:06:39.853 CC module/keyring/linux/keyring_rpc.o 00:06:39.853 SO libspdk_accel_error.so.2.0 00:06:39.853 CC module/accel/dsa/accel_dsa.o 00:06:39.853 SYMLINK libspdk_accel_error.so 00:06:39.853 CC module/accel/dsa/accel_dsa_rpc.o 00:06:39.853 CC module/accel/iaa/accel_iaa.o 00:06:39.853 LIB libspdk_keyring_linux.a 00:06:39.853 SO libspdk_keyring_linux.so.1.0 00:06:40.115 CC module/bdev/delay/vbdev_delay.o 00:06:40.115 SYMLINK libspdk_keyring_linux.so 00:06:40.115 CC module/blobfs/bdev/blobfs_bdev.o 00:06:40.115 CC module/bdev/error/vbdev_error.o 00:06:40.115 LIB libspdk_accel_dsa.a 00:06:40.115 CC module/bdev/gpt/gpt.o 00:06:40.115 LIB libspdk_fsdev_aio.a 00:06:40.115 CC module/accel/iaa/accel_iaa_rpc.o 00:06:40.115 SO libspdk_accel_dsa.so.5.0 00:06:40.115 CC module/bdev/lvol/vbdev_lvol.o 00:06:40.115 LIB libspdk_sock_posix.a 00:06:40.115 SO libspdk_fsdev_aio.so.1.0 00:06:40.115 CC module/bdev/malloc/bdev_malloc.o 00:06:40.115 SO libspdk_sock_posix.so.6.0 00:06:40.115 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:40.115 SYMLINK libspdk_fsdev_aio.so 00:06:40.115 SYMLINK libspdk_accel_dsa.so 00:06:40.115 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:40.115 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:40.115 LIB libspdk_accel_iaa.a 00:06:40.115 SYMLINK libspdk_sock_posix.so 00:06:40.115 CC module/bdev/error/vbdev_error_rpc.o 00:06:40.115 SO libspdk_accel_iaa.so.3.0 00:06:40.377 CC module/bdev/gpt/vbdev_gpt.o 00:06:40.377 SYMLINK libspdk_accel_iaa.so 00:06:40.377 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:40.377 LIB 
libspdk_blobfs_bdev.a 00:06:40.377 SO libspdk_blobfs_bdev.so.6.0 00:06:40.377 LIB libspdk_bdev_delay.a 00:06:40.377 LIB libspdk_bdev_error.a 00:06:40.377 SO libspdk_bdev_delay.so.6.0 00:06:40.377 SYMLINK libspdk_blobfs_bdev.so 00:06:40.377 SO libspdk_bdev_error.so.6.0 00:06:40.377 CC module/bdev/null/bdev_null.o 00:06:40.377 CC module/bdev/nvme/bdev_nvme.o 00:06:40.377 SYMLINK libspdk_bdev_delay.so 00:06:40.377 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:40.377 SYMLINK libspdk_bdev_error.so 00:06:40.377 CC module/bdev/null/bdev_null_rpc.o 00:06:40.639 LIB libspdk_bdev_gpt.a 00:06:40.639 LIB libspdk_bdev_malloc.a 00:06:40.639 LIB libspdk_bdev_lvol.a 00:06:40.639 SO libspdk_bdev_gpt.so.6.0 00:06:40.639 CC module/bdev/passthru/vbdev_passthru.o 00:06:40.639 SO libspdk_bdev_malloc.so.6.0 00:06:40.639 SO libspdk_bdev_lvol.so.6.0 00:06:40.639 CC module/bdev/raid/bdev_raid.o 00:06:40.639 CC module/bdev/raid/bdev_raid_rpc.o 00:06:40.639 CC module/bdev/split/vbdev_split.o 00:06:40.639 SYMLINK libspdk_bdev_gpt.so 00:06:40.639 SYMLINK libspdk_bdev_malloc.so 00:06:40.639 CC module/bdev/raid/bdev_raid_sb.o 00:06:40.639 SYMLINK libspdk_bdev_lvol.so 00:06:40.639 CC module/bdev/raid/raid0.o 00:06:40.639 LIB libspdk_bdev_null.a 00:06:40.639 SO libspdk_bdev_null.so.6.0 00:06:40.639 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:40.907 CC module/bdev/split/vbdev_split_rpc.o 00:06:40.907 SYMLINK libspdk_bdev_null.so 00:06:40.907 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:40.907 CC module/bdev/raid/raid1.o 00:06:40.907 CC module/bdev/raid/concat.o 00:06:40.907 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:40.907 LIB libspdk_bdev_split.a 00:06:40.907 SO libspdk_bdev_split.so.6.0 00:06:40.907 LIB libspdk_bdev_passthru.a 00:06:41.198 SO libspdk_bdev_passthru.so.6.0 00:06:41.198 CC module/bdev/xnvme/bdev_xnvme.o 00:06:41.198 SYMLINK libspdk_bdev_split.so 00:06:41.198 LIB libspdk_bdev_zone_block.a 00:06:41.198 CC module/bdev/aio/bdev_aio.o 00:06:41.198 SYMLINK libspdk_bdev_passthru.so 00:06:41.198 SO libspdk_bdev_zone_block.so.6.0 00:06:41.198 CC module/bdev/nvme/nvme_rpc.o 00:06:41.198 CC module/bdev/nvme/bdev_mdns_client.o 00:06:41.198 SYMLINK libspdk_bdev_zone_block.so 00:06:41.198 CC module/bdev/aio/bdev_aio_rpc.o 00:06:41.198 CC module/bdev/ftl/bdev_ftl.o 00:06:41.198 CC module/bdev/iscsi/bdev_iscsi.o 00:06:41.198 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:41.457 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:41.457 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:06:41.457 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:41.457 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:41.457 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:41.457 LIB libspdk_bdev_raid.a 00:06:41.457 LIB libspdk_bdev_aio.a 00:06:41.457 SO libspdk_bdev_raid.so.6.0 00:06:41.457 SO libspdk_bdev_aio.so.6.0 00:06:41.457 CC module/bdev/nvme/vbdev_opal.o 00:06:41.457 LIB libspdk_bdev_xnvme.a 00:06:41.457 SO libspdk_bdev_xnvme.so.3.0 00:06:41.457 SYMLINK libspdk_bdev_aio.so 00:06:41.457 SYMLINK libspdk_bdev_raid.so 00:06:41.457 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:41.457 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:41.717 SYMLINK libspdk_bdev_xnvme.so 00:06:41.717 LIB libspdk_bdev_iscsi.a 00:06:41.717 LIB libspdk_bdev_ftl.a 00:06:41.717 SO libspdk_bdev_iscsi.so.6.0 00:06:41.717 SO libspdk_bdev_ftl.so.6.0 00:06:41.717 SYMLINK libspdk_bdev_iscsi.so 00:06:41.717 SYMLINK libspdk_bdev_ftl.so 00:06:41.977 LIB libspdk_bdev_virtio.a 00:06:41.977 SO libspdk_bdev_virtio.so.6.0 00:06:41.977 SYMLINK libspdk_bdev_virtio.so 00:06:43.358 LIB 
libspdk_bdev_nvme.a 00:06:43.358 SO libspdk_bdev_nvme.so.7.1 00:06:43.358 SYMLINK libspdk_bdev_nvme.so 00:06:43.928 CC module/event/subsystems/iobuf/iobuf.o 00:06:43.928 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:43.928 CC module/event/subsystems/sock/sock.o 00:06:43.928 CC module/event/subsystems/scheduler/scheduler.o 00:06:43.928 CC module/event/subsystems/vmd/vmd.o 00:06:43.928 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:43.928 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:43.928 CC module/event/subsystems/keyring/keyring.o 00:06:43.928 CC module/event/subsystems/fsdev/fsdev.o 00:06:43.928 LIB libspdk_event_sock.a 00:06:43.928 LIB libspdk_event_scheduler.a 00:06:43.928 LIB libspdk_event_vhost_blk.a 00:06:43.928 SO libspdk_event_sock.so.5.0 00:06:43.928 LIB libspdk_event_vmd.a 00:06:43.928 LIB libspdk_event_iobuf.a 00:06:43.928 SO libspdk_event_scheduler.so.4.0 00:06:43.928 SO libspdk_event_vmd.so.6.0 00:06:43.928 SO libspdk_event_iobuf.so.3.0 00:06:43.928 SO libspdk_event_vhost_blk.so.3.0 00:06:43.928 SYMLINK libspdk_event_sock.so 00:06:43.928 LIB libspdk_event_keyring.a 00:06:43.928 SYMLINK libspdk_event_scheduler.so 00:06:43.928 SYMLINK libspdk_event_vhost_blk.so 00:06:43.928 SYMLINK libspdk_event_iobuf.so 00:06:44.189 SO libspdk_event_keyring.so.1.0 00:06:44.189 LIB libspdk_event_fsdev.a 00:06:44.189 SYMLINK libspdk_event_vmd.so 00:06:44.189 SO libspdk_event_fsdev.so.1.0 00:06:44.189 SYMLINK libspdk_event_keyring.so 00:06:44.189 SYMLINK libspdk_event_fsdev.so 00:06:44.189 CC module/event/subsystems/accel/accel.o 00:06:44.448 LIB libspdk_event_accel.a 00:06:44.448 SO libspdk_event_accel.so.6.0 00:06:44.448 SYMLINK libspdk_event_accel.so 00:06:44.708 CC module/event/subsystems/bdev/bdev.o 00:06:44.708 LIB libspdk_event_bdev.a 00:06:44.969 SO libspdk_event_bdev.so.6.0 00:06:44.969 SYMLINK libspdk_event_bdev.so 00:06:44.969 CC module/event/subsystems/ublk/ublk.o 00:06:44.969 CC module/event/subsystems/nbd/nbd.o 00:06:44.969 CC module/event/subsystems/scsi/scsi.o 00:06:44.969 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:44.969 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:45.230 LIB libspdk_event_scsi.a 00:06:45.230 LIB libspdk_event_ublk.a 00:06:45.230 LIB libspdk_event_nbd.a 00:06:45.230 SO libspdk_event_scsi.so.6.0 00:06:45.230 SO libspdk_event_nbd.so.6.0 00:06:45.230 SO libspdk_event_ublk.so.3.0 00:06:45.230 SYMLINK libspdk_event_nbd.so 00:06:45.230 SYMLINK libspdk_event_scsi.so 00:06:45.230 SYMLINK libspdk_event_ublk.so 00:06:45.230 LIB libspdk_event_nvmf.a 00:06:45.490 SO libspdk_event_nvmf.so.6.0 00:06:45.490 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:45.490 CC module/event/subsystems/iscsi/iscsi.o 00:06:45.490 SYMLINK libspdk_event_nvmf.so 00:06:45.490 LIB libspdk_event_vhost_scsi.a 00:06:45.490 SO libspdk_event_vhost_scsi.so.3.0 00:06:45.490 LIB libspdk_event_iscsi.a 00:06:45.490 SYMLINK libspdk_event_vhost_scsi.so 00:06:45.490 SO libspdk_event_iscsi.so.6.0 00:06:45.752 SYMLINK libspdk_event_iscsi.so 00:06:45.752 SO libspdk.so.6.0 00:06:45.752 SYMLINK libspdk.so 00:06:46.081 CXX app/trace/trace.o 00:06:46.081 CC test/rpc_client/rpc_client_test.o 00:06:46.081 TEST_HEADER include/spdk/accel.h 00:06:46.081 TEST_HEADER include/spdk/accel_module.h 00:06:46.081 TEST_HEADER include/spdk/assert.h 00:06:46.081 TEST_HEADER include/spdk/barrier.h 00:06:46.081 TEST_HEADER include/spdk/base64.h 00:06:46.081 TEST_HEADER include/spdk/bdev.h 00:06:46.081 TEST_HEADER include/spdk/bdev_module.h 00:06:46.081 TEST_HEADER include/spdk/bdev_zone.h 00:06:46.081 
TEST_HEADER include/spdk/bit_array.h 00:06:46.081 TEST_HEADER include/spdk/bit_pool.h 00:06:46.081 TEST_HEADER include/spdk/blob_bdev.h 00:06:46.081 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:46.081 TEST_HEADER include/spdk/blobfs.h 00:06:46.081 TEST_HEADER include/spdk/blob.h 00:06:46.081 TEST_HEADER include/spdk/conf.h 00:06:46.081 TEST_HEADER include/spdk/config.h 00:06:46.081 TEST_HEADER include/spdk/cpuset.h 00:06:46.081 TEST_HEADER include/spdk/crc16.h 00:06:46.081 TEST_HEADER include/spdk/crc32.h 00:06:46.081 TEST_HEADER include/spdk/crc64.h 00:06:46.081 TEST_HEADER include/spdk/dif.h 00:06:46.081 TEST_HEADER include/spdk/dma.h 00:06:46.081 TEST_HEADER include/spdk/endian.h 00:06:46.081 TEST_HEADER include/spdk/env_dpdk.h 00:06:46.081 TEST_HEADER include/spdk/env.h 00:06:46.081 TEST_HEADER include/spdk/event.h 00:06:46.081 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:46.081 TEST_HEADER include/spdk/fd_group.h 00:06:46.081 TEST_HEADER include/spdk/fd.h 00:06:46.081 CC test/thread/poller_perf/poller_perf.o 00:06:46.081 TEST_HEADER include/spdk/file.h 00:06:46.081 CC examples/util/zipf/zipf.o 00:06:46.081 TEST_HEADER include/spdk/fsdev.h 00:06:46.081 TEST_HEADER include/spdk/fsdev_module.h 00:06:46.081 CC examples/ioat/perf/perf.o 00:06:46.081 TEST_HEADER include/spdk/ftl.h 00:06:46.081 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:46.081 TEST_HEADER include/spdk/gpt_spec.h 00:06:46.081 TEST_HEADER include/spdk/hexlify.h 00:06:46.081 TEST_HEADER include/spdk/histogram_data.h 00:06:46.081 TEST_HEADER include/spdk/idxd.h 00:06:46.081 TEST_HEADER include/spdk/idxd_spec.h 00:06:46.081 TEST_HEADER include/spdk/init.h 00:06:46.081 TEST_HEADER include/spdk/ioat.h 00:06:46.081 TEST_HEADER include/spdk/ioat_spec.h 00:06:46.081 TEST_HEADER include/spdk/iscsi_spec.h 00:06:46.081 CC test/dma/test_dma/test_dma.o 00:06:46.081 TEST_HEADER include/spdk/json.h 00:06:46.081 TEST_HEADER include/spdk/jsonrpc.h 00:06:46.081 TEST_HEADER include/spdk/keyring.h 00:06:46.081 TEST_HEADER include/spdk/keyring_module.h 00:06:46.081 TEST_HEADER include/spdk/likely.h 00:06:46.081 TEST_HEADER include/spdk/log.h 00:06:46.081 TEST_HEADER include/spdk/lvol.h 00:06:46.081 TEST_HEADER include/spdk/md5.h 00:06:46.081 CC test/app/bdev_svc/bdev_svc.o 00:06:46.081 TEST_HEADER include/spdk/memory.h 00:06:46.081 TEST_HEADER include/spdk/mmio.h 00:06:46.081 TEST_HEADER include/spdk/nbd.h 00:06:46.081 TEST_HEADER include/spdk/net.h 00:06:46.081 TEST_HEADER include/spdk/notify.h 00:06:46.081 TEST_HEADER include/spdk/nvme.h 00:06:46.081 TEST_HEADER include/spdk/nvme_intel.h 00:06:46.081 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:46.081 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:46.081 TEST_HEADER include/spdk/nvme_spec.h 00:06:46.081 TEST_HEADER include/spdk/nvme_zns.h 00:06:46.081 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:46.081 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:46.081 TEST_HEADER include/spdk/nvmf.h 00:06:46.081 TEST_HEADER include/spdk/nvmf_spec.h 00:06:46.081 TEST_HEADER include/spdk/nvmf_transport.h 00:06:46.081 CC test/env/mem_callbacks/mem_callbacks.o 00:06:46.081 TEST_HEADER include/spdk/opal.h 00:06:46.081 TEST_HEADER include/spdk/opal_spec.h 00:06:46.081 TEST_HEADER include/spdk/pci_ids.h 00:06:46.081 TEST_HEADER include/spdk/pipe.h 00:06:46.081 TEST_HEADER include/spdk/queue.h 00:06:46.081 TEST_HEADER include/spdk/reduce.h 00:06:46.081 TEST_HEADER include/spdk/rpc.h 00:06:46.081 TEST_HEADER include/spdk/scheduler.h 00:06:46.081 TEST_HEADER include/spdk/scsi.h 00:06:46.081 TEST_HEADER 
include/spdk/scsi_spec.h 00:06:46.081 TEST_HEADER include/spdk/sock.h 00:06:46.081 TEST_HEADER include/spdk/stdinc.h 00:06:46.081 TEST_HEADER include/spdk/string.h 00:06:46.081 TEST_HEADER include/spdk/thread.h 00:06:46.081 TEST_HEADER include/spdk/trace.h 00:06:46.081 TEST_HEADER include/spdk/trace_parser.h 00:06:46.081 TEST_HEADER include/spdk/tree.h 00:06:46.081 LINK rpc_client_test 00:06:46.081 TEST_HEADER include/spdk/ublk.h 00:06:46.081 TEST_HEADER include/spdk/util.h 00:06:46.081 TEST_HEADER include/spdk/uuid.h 00:06:46.081 LINK zipf 00:06:46.081 TEST_HEADER include/spdk/version.h 00:06:46.081 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:46.081 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:46.081 TEST_HEADER include/spdk/vhost.h 00:06:46.081 TEST_HEADER include/spdk/vmd.h 00:06:46.081 TEST_HEADER include/spdk/xor.h 00:06:46.081 LINK poller_perf 00:06:46.081 TEST_HEADER include/spdk/zipf.h 00:06:46.081 CXX test/cpp_headers/accel.o 00:06:46.359 LINK interrupt_tgt 00:06:46.359 LINK bdev_svc 00:06:46.359 LINK ioat_perf 00:06:46.359 CXX test/cpp_headers/accel_module.o 00:06:46.359 CXX test/cpp_headers/assert.o 00:06:46.359 CXX test/cpp_headers/barrier.o 00:06:46.359 LINK spdk_trace 00:06:46.359 CC app/trace_record/trace_record.o 00:06:46.359 CC examples/ioat/verify/verify.o 00:06:46.359 CXX test/cpp_headers/base64.o 00:06:46.359 CC test/event/event_perf/event_perf.o 00:06:46.620 CC test/app/histogram_perf/histogram_perf.o 00:06:46.620 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:46.620 CC test/env/vtophys/vtophys.o 00:06:46.620 LINK spdk_trace_record 00:06:46.620 CC examples/thread/thread/thread_ex.o 00:06:46.620 CXX test/cpp_headers/bdev.o 00:06:46.620 LINK event_perf 00:06:46.620 LINK mem_callbacks 00:06:46.620 LINK histogram_perf 00:06:46.620 LINK verify 00:06:46.620 LINK test_dma 00:06:46.620 LINK vtophys 00:06:46.880 CXX test/cpp_headers/bdev_module.o 00:06:46.880 CC app/nvmf_tgt/nvmf_main.o 00:06:46.880 LINK thread 00:06:46.880 CC test/event/reactor/reactor.o 00:06:46.880 CC app/iscsi_tgt/iscsi_tgt.o 00:06:46.880 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:46.880 CC examples/sock/hello_world/hello_sock.o 00:06:46.880 CC examples/vmd/lsvmd/lsvmd.o 00:06:46.880 CXX test/cpp_headers/bdev_zone.o 00:06:46.880 LINK reactor 00:06:46.880 LINK nvme_fuzz 00:06:46.880 CC examples/idxd/perf/perf.o 00:06:46.880 LINK nvmf_tgt 00:06:47.140 LINK env_dpdk_post_init 00:06:47.140 LINK lsvmd 00:06:47.140 CC test/event/reactor_perf/reactor_perf.o 00:06:47.140 LINK iscsi_tgt 00:06:47.140 LINK hello_sock 00:06:47.140 CXX test/cpp_headers/bit_array.o 00:06:47.140 LINK reactor_perf 00:06:47.140 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:47.140 CC test/env/memory/memory_ut.o 00:06:47.140 CC examples/accel/perf/accel_perf.o 00:06:47.140 CXX test/cpp_headers/bit_pool.o 00:06:47.400 CC examples/vmd/led/led.o 00:06:47.400 CC examples/blob/hello_world/hello_blob.o 00:06:47.400 LINK idxd_perf 00:06:47.400 CC examples/blob/cli/blobcli.o 00:06:47.400 CC app/spdk_tgt/spdk_tgt.o 00:06:47.400 CXX test/cpp_headers/blob_bdev.o 00:06:47.400 LINK led 00:06:47.400 CC test/event/app_repeat/app_repeat.o 00:06:47.661 LINK hello_blob 00:06:47.661 CXX test/cpp_headers/blobfs_bdev.o 00:06:47.661 LINK app_repeat 00:06:47.661 CC test/event/scheduler/scheduler.o 00:06:47.661 LINK spdk_tgt 00:06:47.661 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:47.661 CXX test/cpp_headers/blobfs.o 00:06:47.661 LINK accel_perf 00:06:47.921 LINK blobcli 00:06:47.921 LINK scheduler 00:06:47.921 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:47.921 CC examples/nvme/hello_world/hello_world.o 00:06:47.921 CXX test/cpp_headers/blob.o 00:06:47.921 CC app/spdk_lspci/spdk_lspci.o 00:06:47.921 CC test/env/pci/pci_ut.o 00:06:47.921 LINK hello_fsdev 00:06:47.921 LINK spdk_lspci 00:06:47.921 CXX test/cpp_headers/conf.o 00:06:47.921 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:48.181 LINK hello_world 00:06:48.181 CXX test/cpp_headers/config.o 00:06:48.181 LINK memory_ut 00:06:48.181 CXX test/cpp_headers/cpuset.o 00:06:48.181 CC app/spdk_nvme_perf/perf.o 00:06:48.181 CC examples/bdev/hello_world/hello_bdev.o 00:06:48.181 CC test/accel/dif/dif.o 00:06:48.181 CC examples/bdev/bdevperf/bdevperf.o 00:06:48.181 LINK pci_ut 00:06:48.181 CC examples/nvme/reconnect/reconnect.o 00:06:48.459 CXX test/cpp_headers/crc16.o 00:06:48.459 LINK vhost_fuzz 00:06:48.459 CC test/app/jsoncat/jsoncat.o 00:06:48.459 LINK hello_bdev 00:06:48.459 CXX test/cpp_headers/crc32.o 00:06:48.459 CXX test/cpp_headers/crc64.o 00:06:48.459 CC app/spdk_nvme_identify/identify.o 00:06:48.459 LINK jsoncat 00:06:48.459 LINK reconnect 00:06:48.722 CXX test/cpp_headers/dif.o 00:06:48.722 CXX test/cpp_headers/dma.o 00:06:48.722 CC test/app/stub/stub.o 00:06:48.722 CXX test/cpp_headers/endian.o 00:06:48.722 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:48.722 CC test/blobfs/mkfs/mkfs.o 00:06:48.722 LINK stub 00:06:48.983 LINK spdk_nvme_perf 00:06:48.983 CXX test/cpp_headers/env_dpdk.o 00:06:48.983 LINK dif 00:06:48.983 LINK mkfs 00:06:48.983 CC test/lvol/esnap/esnap.o 00:06:48.983 LINK iscsi_fuzz 00:06:48.983 CC examples/nvme/arbitration/arbitration.o 00:06:48.983 CXX test/cpp_headers/env.o 00:06:48.983 CC examples/nvme/hotplug/hotplug.o 00:06:48.983 LINK bdevperf 00:06:49.244 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:49.244 LINK nvme_manage 00:06:49.244 CXX test/cpp_headers/event.o 00:06:49.244 CC test/nvme/aer/aer.o 00:06:49.244 CC test/nvme/reset/reset.o 00:06:49.244 LINK arbitration 00:06:49.244 CXX test/cpp_headers/fd_group.o 00:06:49.244 LINK cmb_copy 00:06:49.244 LINK hotplug 00:06:49.244 CC test/nvme/sgl/sgl.o 00:06:49.244 CC test/nvme/e2edp/nvme_dp.o 00:06:49.504 CXX test/cpp_headers/fd.o 00:06:49.504 LINK spdk_nvme_identify 00:06:49.504 CXX test/cpp_headers/file.o 00:06:49.504 LINK reset 00:06:49.504 CC test/nvme/overhead/overhead.o 00:06:49.504 CC examples/nvme/abort/abort.o 00:06:49.504 CXX test/cpp_headers/fsdev.o 00:06:49.504 LINK aer 00:06:49.504 CC test/nvme/err_injection/err_injection.o 00:06:49.504 LINK sgl 00:06:49.504 CC app/spdk_nvme_discover/discovery_aer.o 00:06:49.504 LINK nvme_dp 00:06:49.767 CC test/nvme/startup/startup.o 00:06:49.767 CXX test/cpp_headers/fsdev_module.o 00:06:49.767 CXX test/cpp_headers/ftl.o 00:06:49.767 CC test/nvme/reserve/reserve.o 00:06:49.767 LINK err_injection 00:06:49.767 LINK startup 00:06:49.767 CC test/nvme/simple_copy/simple_copy.o 00:06:49.767 LINK overhead 00:06:49.767 LINK spdk_nvme_discover 00:06:49.767 CXX test/cpp_headers/fuse_dispatcher.o 00:06:50.026 CXX test/cpp_headers/gpt_spec.o 00:06:50.026 LINK abort 00:06:50.026 LINK reserve 00:06:50.026 CC test/nvme/connect_stress/connect_stress.o 00:06:50.026 CC test/nvme/boot_partition/boot_partition.o 00:06:50.026 CC test/nvme/compliance/nvme_compliance.o 00:06:50.026 CC app/spdk_top/spdk_top.o 00:06:50.026 LINK simple_copy 00:06:50.026 CXX test/cpp_headers/hexlify.o 00:06:50.026 LINK connect_stress 00:06:50.026 CC test/nvme/fused_ordering/fused_ordering.o 00:06:50.026 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:50.026 
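The TEST_HEADER list and the per-header objects above (CXX test/cpp_headers/*.o) are a header self-sufficiency check: every public header under include/spdk/ is compiled in isolation, so a header missing one of its own includes fails here rather than in a consumer. A minimal sketch of the idea, assuming the in-tree layout the log paths suggest (the generator loop is illustrative, not SPDK's actual makefile rule):

# Compile each public header as its own C++ translation unit; any header
# that is not self-contained fails this step on its own.
for hdr in include/spdk/*.h; do
    name=$(basename "$hdr" .h)
    printf '#include <spdk/%s.h>\n' "$name" > "test/cpp_headers/$name.cpp"
    c++ -Iinclude -c "test/cpp_headers/$name.cpp" -o "test/cpp_headers/$name.o"
done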
LINK boot_partition 00:06:50.026 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:50.285 CXX test/cpp_headers/histogram_data.o 00:06:50.285 CXX test/cpp_headers/idxd.o 00:06:50.285 CC test/nvme/fdp/fdp.o 00:06:50.285 LINK doorbell_aers 00:06:50.285 LINK pmr_persistence 00:06:50.285 LINK nvme_compliance 00:06:50.285 LINK fused_ordering 00:06:50.285 CC test/nvme/cuse/cuse.o 00:06:50.285 CXX test/cpp_headers/idxd_spec.o 00:06:50.285 CC app/vhost/vhost.o 00:06:50.285 CXX test/cpp_headers/init.o 00:06:50.285 CXX test/cpp_headers/ioat.o 00:06:50.545 CXX test/cpp_headers/ioat_spec.o 00:06:50.545 CXX test/cpp_headers/iscsi_spec.o 00:06:50.545 LINK vhost 00:06:50.545 CXX test/cpp_headers/json.o 00:06:50.545 CXX test/cpp_headers/jsonrpc.o 00:06:50.545 LINK fdp 00:06:50.545 CC test/bdev/bdevio/bdevio.o 00:06:50.545 CXX test/cpp_headers/keyring.o 00:06:50.545 CC examples/nvmf/nvmf/nvmf.o 00:06:50.806 CXX test/cpp_headers/keyring_module.o 00:06:50.806 CXX test/cpp_headers/likely.o 00:06:50.806 CXX test/cpp_headers/log.o 00:06:50.806 CC app/spdk_dd/spdk_dd.o 00:06:50.806 CC app/fio/nvme/fio_plugin.o 00:06:50.806 CXX test/cpp_headers/lvol.o 00:06:50.806 CXX test/cpp_headers/md5.o 00:06:50.806 LINK bdevio 00:06:51.068 LINK nvmf 00:06:51.068 LINK spdk_top 00:06:51.068 CC app/fio/bdev/fio_plugin.o 00:06:51.068 CXX test/cpp_headers/memory.o 00:06:51.068 LINK spdk_dd 00:06:51.068 CXX test/cpp_headers/mmio.o 00:06:51.068 CXX test/cpp_headers/nbd.o 00:06:51.068 CXX test/cpp_headers/net.o 00:06:51.068 CXX test/cpp_headers/notify.o 00:06:51.068 CXX test/cpp_headers/nvme.o 00:06:51.068 CXX test/cpp_headers/nvme_intel.o 00:06:51.068 CXX test/cpp_headers/nvme_ocssd.o 00:06:51.068 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:51.399 CXX test/cpp_headers/nvme_spec.o 00:06:51.399 CXX test/cpp_headers/nvme_zns.o 00:06:51.399 LINK spdk_nvme 00:06:51.399 CXX test/cpp_headers/nvmf_cmd.o 00:06:51.399 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:51.399 CXX test/cpp_headers/nvmf.o 00:06:51.399 CXX test/cpp_headers/nvmf_spec.o 00:06:51.399 LINK cuse 00:06:51.399 CXX test/cpp_headers/nvmf_transport.o 00:06:51.399 CXX test/cpp_headers/opal.o 00:06:51.399 CXX test/cpp_headers/opal_spec.o 00:06:51.399 CXX test/cpp_headers/pci_ids.o 00:06:51.399 CXX test/cpp_headers/pipe.o 00:06:51.399 CXX test/cpp_headers/queue.o 00:06:51.399 CXX test/cpp_headers/reduce.o 00:06:51.713 CXX test/cpp_headers/rpc.o 00:06:51.713 LINK spdk_bdev 00:06:51.713 CXX test/cpp_headers/scheduler.o 00:06:51.713 CXX test/cpp_headers/scsi.o 00:06:51.713 CXX test/cpp_headers/scsi_spec.o 00:06:51.713 CXX test/cpp_headers/sock.o 00:06:51.713 CXX test/cpp_headers/stdinc.o 00:06:51.713 CXX test/cpp_headers/string.o 00:06:51.713 CXX test/cpp_headers/thread.o 00:06:51.713 CXX test/cpp_headers/trace.o 00:06:51.713 CXX test/cpp_headers/trace_parser.o 00:06:51.713 CXX test/cpp_headers/tree.o 00:06:51.713 CXX test/cpp_headers/ublk.o 00:06:51.713 CXX test/cpp_headers/util.o 00:06:51.713 CXX test/cpp_headers/uuid.o 00:06:51.713 CXX test/cpp_headers/version.o 00:06:51.713 CXX test/cpp_headers/vfio_user_pci.o 00:06:51.713 CXX test/cpp_headers/vfio_user_spec.o 00:06:51.713 CXX test/cpp_headers/vhost.o 00:06:51.713 CXX test/cpp_headers/vmd.o 00:06:51.713 CXX test/cpp_headers/xor.o 00:06:51.713 CXX test/cpp_headers/zipf.o 00:06:54.318 LINK esnap 00:06:54.606 00:06:54.606 real 1m14.058s 00:06:54.606 user 6m39.996s 00:06:54.606 sys 1m11.311s 00:06:54.606 19:25:13 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:54.606 ************************************ 
00:06:54.606 END TEST make 00:06:54.606 ************************************ 00:06:54.606 19:25:13 make -- common/autotest_common.sh@10 -- $ set +x 00:06:54.606 19:25:13 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:54.606 19:25:13 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:54.606 19:25:13 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:54.606 19:25:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:54.606 19:25:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:06:54.606 19:25:13 -- pm/common@44 -- $ pid=5062 00:06:54.606 19:25:13 -- pm/common@50 -- $ kill -TERM 5062 00:06:54.606 19:25:13 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:54.606 19:25:13 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:06:54.606 19:25:13 -- pm/common@44 -- $ pid=5063 00:06:54.606 19:25:13 -- pm/common@50 -- $ kill -TERM 5063 00:06:54.606 19:25:13 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:54.606 19:25:13 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:06:54.606 19:25:13 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:54.606 19:25:13 -- common/autotest_common.sh@1711 -- # lcov --version 00:06:54.606 19:25:13 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:54.606 19:25:13 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:54.606 19:25:13 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.606 19:25:13 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.606 19:25:13 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.606 19:25:13 -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.606 19:25:13 -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.606 19:25:13 -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.606 19:25:13 -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.606 19:25:13 -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.606 19:25:13 -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.606 19:25:13 -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.606 19:25:13 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.606 19:25:13 -- scripts/common.sh@344 -- # case "$op" in 00:06:54.606 19:25:13 -- scripts/common.sh@345 -- # : 1 00:06:54.606 19:25:13 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.606 19:25:13 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:54.606 19:25:13 -- scripts/common.sh@365 -- # decimal 1 00:06:54.606 19:25:13 -- scripts/common.sh@353 -- # local d=1 00:06:54.606 19:25:13 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.606 19:25:13 -- scripts/common.sh@355 -- # echo 1 00:06:54.606 19:25:13 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.606 19:25:13 -- scripts/common.sh@366 -- # decimal 2 00:06:54.606 19:25:13 -- scripts/common.sh@353 -- # local d=2 00:06:54.606 19:25:13 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.606 19:25:13 -- scripts/common.sh@355 -- # echo 2 00:06:54.606 19:25:13 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.607 19:25:13 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.607 19:25:13 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.607 19:25:13 -- scripts/common.sh@368 -- # return 0 00:06:54.607 19:25:13 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.607 19:25:13 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:54.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.607 --rc genhtml_branch_coverage=1 00:06:54.607 --rc genhtml_function_coverage=1 00:06:54.607 --rc genhtml_legend=1 00:06:54.607 --rc geninfo_all_blocks=1 00:06:54.607 --rc geninfo_unexecuted_blocks=1 00:06:54.607 00:06:54.607 ' 00:06:54.607 19:25:13 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:54.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.607 --rc genhtml_branch_coverage=1 00:06:54.607 --rc genhtml_function_coverage=1 00:06:54.607 --rc genhtml_legend=1 00:06:54.607 --rc geninfo_all_blocks=1 00:06:54.607 --rc geninfo_unexecuted_blocks=1 00:06:54.607 00:06:54.607 ' 00:06:54.607 19:25:13 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:54.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.607 --rc genhtml_branch_coverage=1 00:06:54.607 --rc genhtml_function_coverage=1 00:06:54.607 --rc genhtml_legend=1 00:06:54.607 --rc geninfo_all_blocks=1 00:06:54.607 --rc geninfo_unexecuted_blocks=1 00:06:54.607 00:06:54.607 ' 00:06:54.607 19:25:13 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:54.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.607 --rc genhtml_branch_coverage=1 00:06:54.607 --rc genhtml_function_coverage=1 00:06:54.607 --rc genhtml_legend=1 00:06:54.607 --rc geninfo_all_blocks=1 00:06:54.607 --rc geninfo_unexecuted_blocks=1 00:06:54.607 00:06:54.607 ' 00:06:54.607 19:25:13 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:54.607 19:25:13 -- nvmf/common.sh@7 -- # uname -s 00:06:54.607 19:25:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:54.607 19:25:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:54.607 19:25:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:54.607 19:25:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:54.607 19:25:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:54.607 19:25:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:54.607 19:25:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:54.607 19:25:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:54.607 19:25:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:54.607 19:25:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:54.607 19:25:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:effe6007-2875-4676-b590-7e2fb497993d 00:06:54.607 
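The gen-hostnqn call above is where the fabrics identity comes from: nvme-cli emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and, as the next trace line shows, common.sh reuses the UUID tail as NVME_HOSTID. A sketch of that derivation (variable names as in the trace; the parameter expansion is an illustrative equivalent, not necessarily the script's exact code):

# Derive the host NQN, then keep only the UUID portion as the host ID.
NVME_HOSTNQN=$(nvme gen-hostnqn)
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # strip everything up to and including 'uuid:'
echo "$NVME_HOSTNQN -> $NVME_HOSTID"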
19:25:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=effe6007-2875-4676-b590-7e2fb497993d 00:06:54.607 19:25:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:54.607 19:25:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:54.607 19:25:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:54.607 19:25:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:54.607 19:25:13 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:54.607 19:25:13 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:54.607 19:25:13 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:54.607 19:25:13 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:54.607 19:25:13 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:54.607 19:25:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.607 19:25:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.607 19:25:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.607 19:25:13 -- paths/export.sh@5 -- # export PATH 00:06:54.607 19:25:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.607 19:25:13 -- nvmf/common.sh@51 -- # : 0 00:06:54.607 19:25:13 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:54.607 19:25:13 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:54.607 19:25:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:54.607 19:25:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:54.607 19:25:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:54.607 19:25:13 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:54.607 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:54.607 19:25:13 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:54.607 19:25:13 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:54.607 19:25:13 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:54.607 19:25:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:54.607 19:25:13 -- spdk/autotest.sh@32 -- # uname -s 00:06:54.607 19:25:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:54.607 19:25:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:54.607 19:25:13 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:54.607 19:25:13 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:06:54.607 19:25:13 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:06:54.607 19:25:13 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:54.607 19:25:13 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:54.607 19:25:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:54.607 19:25:13 -- spdk/autotest.sh@48 -- # udevadm_pid=54314 00:06:54.607 19:25:13 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:54.607 19:25:13 -- pm/common@17 -- # local monitor 00:06:54.607 19:25:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:54.607 19:25:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:54.607 19:25:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:54.607 19:25:13 -- pm/common@25 -- # sleep 1 00:06:54.607 19:25:13 -- pm/common@21 -- # date +%s 00:06:54.607 19:25:13 -- pm/common@21 -- # date +%s 00:06:54.607 19:25:13 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733426713 00:06:54.607 19:25:13 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733426713 00:06:54.607 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733426713_collect-vmstat.pm.log 00:06:54.607 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733426713_collect-cpu-load.pm.log 00:06:56.001 19:25:14 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:56.001 19:25:14 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:56.001 19:25:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:56.001 19:25:14 -- common/autotest_common.sh@10 -- # set +x 00:06:56.001 19:25:14 -- spdk/autotest.sh@59 -- # create_test_list 00:06:56.001 19:25:14 -- common/autotest_common.sh@752 -- # xtrace_disable 00:06:56.001 19:25:14 -- common/autotest_common.sh@10 -- # set +x 00:06:56.001 19:25:14 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:06:56.001 19:25:14 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:06:56.001 19:25:14 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:06:56.001 19:25:14 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:06:56.001 19:25:14 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:06:56.001 19:25:14 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:56.001 19:25:14 -- common/autotest_common.sh@1457 -- # uname 00:06:56.001 19:25:14 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:06:56.001 19:25:14 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:56.001 19:25:14 -- common/autotest_common.sh@1477 -- # uname 00:06:56.001 19:25:14 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:06:56.001 19:25:14 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:56.001 19:25:14 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:56.001 lcov: LCOV version 1.15 00:06:56.001 19:25:14 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:07:10.925 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:10.925 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:07:25.871 19:25:42 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:25.871 19:25:42 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:25.871 19:25:42 -- common/autotest_common.sh@10 -- # set +x 00:07:25.871 19:25:42 -- spdk/autotest.sh@78 -- # rm -f 00:07:25.872 19:25:42 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:25.872 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:25.872 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:07:25.872 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:07:25.872 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:07:25.872 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:07:25.872 19:25:44 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:07:25.872 19:25:44 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:25.872 19:25:44 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:25.872 19:25:44 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:07:25.872 19:25:44 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:07:25.872 19:25:44 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:07:25.872 19:25:44 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:25.872 19:25:44 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:07:25.872 19:25:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:25.872 19:25:44 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:07:25.872 19:25:44 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:25.872 19:25:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:25.872 19:25:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:25.872 19:25:44 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:25.872 19:25:44 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:07:25.872 19:25:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:25.872 19:25:44 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:07:25.872 19:25:44 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:25.872 19:25:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:25.872 19:25:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:25.872 19:25:44 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:25.872 19:25:44 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:07:25.872 19:25:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:25.872 19:25:44 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:07:25.872 19:25:44 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:25.872 19:25:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:25.872 19:25:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:25.872 19:25:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:25.872 19:25:44 -- common/autotest_common.sh@1671 
-- # is_block_zoned nvme2n2 00:07:25.872 19:25:44 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:25.872 19:25:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:25.872 19:25:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:25.872 19:25:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:25.872 19:25:44 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:07:25.872 19:25:44 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:25.872 19:25:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:25.872 19:25:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:25.872 19:25:44 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:07:25.872 19:25:44 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:07:25.872 19:25:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:07:25.872 19:25:44 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:07:25.872 19:25:44 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:25.872 19:25:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:25.872 19:25:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:25.872 19:25:44 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:25.872 19:25:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:25.872 19:25:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:25.872 19:25:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:07:25.872 19:25:44 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:25.872 19:25:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:25.872 No valid GPT data, bailing 00:07:25.872 19:25:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:25.872 19:25:44 -- scripts/common.sh@394 -- # pt= 00:07:25.872 19:25:44 -- scripts/common.sh@395 -- # return 1 00:07:25.872 19:25:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:25.872 1+0 records in 00:07:25.872 1+0 records out 00:07:25.872 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0256299 s, 40.9 MB/s 00:07:25.872 19:25:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:25.872 19:25:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:25.872 19:25:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:07:25.872 19:25:44 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:07:25.872 19:25:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:07:25.872 No valid GPT data, bailing 00:07:25.872 19:25:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:07:25.872 19:25:44 -- scripts/common.sh@394 -- # pt= 00:07:25.872 19:25:44 -- scripts/common.sh@395 -- # return 1 00:07:25.872 19:25:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:07:25.872 1+0 records in 00:07:25.872 1+0 records out 00:07:25.872 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00610165 s, 172 MB/s 00:07:25.872 19:25:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:25.872 19:25:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:25.872 19:25:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:07:25.872 19:25:44 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:07:25.872 19:25:44 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:07:25.872 No valid GPT data, bailing 00:07:25.872 19:25:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:07:25.872 19:25:44 -- scripts/common.sh@394 -- # pt= 00:07:25.872 19:25:44 -- scripts/common.sh@395 -- # return 1 00:07:25.872 19:25:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:07:25.872 1+0 records in 00:07:25.872 1+0 records out 00:07:25.872 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00660125 s, 159 MB/s 00:07:25.872 19:25:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:25.872 19:25:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:25.872 19:25:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:07:25.872 19:25:44 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:07:25.872 19:25:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:07:25.872 No valid GPT data, bailing 00:07:25.872 19:25:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:07:25.872 19:25:44 -- scripts/common.sh@394 -- # pt= 00:07:25.872 19:25:44 -- scripts/common.sh@395 -- # return 1 00:07:25.872 19:25:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:07:25.872 1+0 records in 00:07:25.872 1+0 records out 00:07:25.872 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00589924 s, 178 MB/s 00:07:25.872 19:25:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:25.872 19:25:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:25.872 19:25:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:07:25.872 19:25:44 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:07:25.872 19:25:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:07:25.872 No valid GPT data, bailing 00:07:25.873 19:25:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:07:25.873 19:25:44 -- scripts/common.sh@394 -- # pt= 00:07:25.873 19:25:44 -- scripts/common.sh@395 -- # return 1 00:07:25.873 19:25:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:07:25.873 1+0 records in 00:07:25.873 1+0 records out 00:07:25.873 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00598936 s, 175 MB/s 00:07:25.873 19:25:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:25.873 19:25:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:25.873 19:25:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:07:25.873 19:25:44 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:07:25.873 19:25:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:07:25.873 No valid GPT data, bailing 00:07:25.873 19:25:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:07:25.873 19:25:44 -- scripts/common.sh@394 -- # pt= 00:07:25.873 19:25:44 -- scripts/common.sh@395 -- # return 1 00:07:25.873 19:25:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:07:25.873 1+0 records in 00:07:25.873 1+0 records out 00:07:25.873 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00658985 s, 159 MB/s 00:07:25.873 19:25:44 -- spdk/autotest.sh@105 -- # sync 00:07:25.873 19:25:44 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:25.873 19:25:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:25.873 19:25:44 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:27.788 
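The pre-clean pass above applies one recipe per namespace: expand /dev/nvme*n!(*p*) (extglob, namespaces but not partitions), ask scripts/spdk-gpt.py whether the device carries a valid GPT, and zero the first MiB only when it does not ("No valid GPT data, bailing" ends in return 1, meaning the device is safe to wipe). A condensed sketch, treating spdk-gpt.py's exit status as the in-use signal, which approximates the block_in_use helper in the trace:

# Zero the first MiB of every unpartitioned, GPT-free NVMe namespace so
# stale on-disk metadata cannot leak into the functional tests.
shopt -s extglob
for dev in /dev/nvme*n!(*p*); do
    if ! scripts/spdk-gpt.py "$dev" >/dev/null 2>&1; then
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done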
19:25:46 -- spdk/autotest.sh@111 -- # uname -s 00:07:27.788 19:25:46 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:27.788 19:25:46 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:27.788 19:25:46 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:07:28.049 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:28.358 Hugepages 00:07:28.358 node hugesize free / total 00:07:28.358 node0 1048576kB 0 / 0 00:07:28.358 node0 2048kB 0 / 0 00:07:28.358 00:07:28.358 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:28.625 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:07:28.625 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:07:28.625 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:07:28.625 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:07:28.888 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:07:28.888 19:25:47 -- spdk/autotest.sh@117 -- # uname -s 00:07:28.888 19:25:47 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:28.888 19:25:47 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:07:28.888 19:25:47 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:29.462 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:30.034 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:30.034 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:30.034 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:30.034 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:30.034 19:25:48 -- common/autotest_common.sh@1517 -- # sleep 1 00:07:30.979 19:25:49 -- common/autotest_common.sh@1518 -- # bdfs=() 00:07:30.979 19:25:49 -- common/autotest_common.sh@1518 -- # local bdfs 00:07:30.979 19:25:49 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:07:30.979 19:25:49 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:07:30.979 19:25:49 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:30.979 19:25:49 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:30.979 19:25:49 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:30.979 19:25:49 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:30.979 19:25:49 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:31.244 19:25:50 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:31.244 19:25:50 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:31.244 19:25:50 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:31.517 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:31.517 Waiting for block devices as requested 00:07:31.778 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:31.778 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:31.778 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:32.039 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:37.330 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:37.330 19:25:55 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:37.330 19:25:55 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 
00:07:37.330 19:25:55 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:07:37.330 19:25:55 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:37.330 19:25:55 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:37.330 19:25:55 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:07:37.330 19:25:55 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:07:37.330 19:25:55 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:07:37.330 19:25:55 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:07:37.330 19:25:55 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:07:37.330 19:25:55 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:07:37.330 19:25:55 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:37.330 19:25:55 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:37.330 19:25:55 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:37.330 19:25:55 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:37.330 19:25:55 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:37.330 19:25:55 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:07:37.330 19:25:55 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:37.330 19:25:55 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:37.330 19:25:55 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:37.330 19:25:55 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:37.330 19:25:55 -- common/autotest_common.sh@1543 -- # continue 00:07:37.330 19:25:55 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:37.330 19:25:55 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:07:37.330 19:25:55 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:37.330 19:25:55 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:07:37.330 19:25:55 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:37.330 19:25:55 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:07:37.330 19:25:55 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:07:37.330 19:25:55 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:07:37.330 19:25:55 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:07:37.330 19:25:55 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:07:37.330 19:25:55 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:37.330 19:25:55 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:37.330 19:25:55 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:07:37.330 19:25:55 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:37.330 19:25:55 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:37.330 19:25:55 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:37.330 19:25:55 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:07:37.330 19:25:55 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:37.330 19:25:55 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:37.330 19:25:55 -- common/autotest_common.sh@1540 -- # 
unvmcap=' 0' 00:07:37.330 19:25:55 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:37.330 19:25:55 -- common/autotest_common.sh@1543 -- # continue 00:07:37.330 19:25:55 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:37.330 19:25:55 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:07:37.330 19:25:55 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:37.330 19:25:55 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:07:37.330 19:25:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:07:37.330 19:25:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:07:37.330 19:25:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:07:37.330 19:25:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:07:37.330 19:25:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:07:37.330 19:25:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:07:37.330 19:25:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:37.330 19:25:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:07:37.330 19:25:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:37.330 19:25:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:37.330 19:25:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:37.330 19:25:56 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:37.330 19:25:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:37.330 19:25:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:37.330 19:25:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:07:37.330 19:25:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:37.330 19:25:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:37.330 19:25:56 -- common/autotest_common.sh@1543 -- # continue 00:07:37.330 19:25:56 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:07:37.330 19:25:56 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:07:37.330 19:25:56 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:07:37.330 19:25:56 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:07:37.330 19:25:56 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:07:37.330 19:25:56 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:07:37.330 19:25:56 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:07:37.330 19:25:56 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:07:37.330 19:25:56 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:07:37.330 19:25:56 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:07:37.330 19:25:56 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:07:37.330 19:25:56 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:07:37.330 19:25:56 -- common/autotest_common.sh@1531 -- # grep oacs 00:07:37.330 19:25:56 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:07:37.330 19:25:56 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:07:37.330 19:25:56 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:07:37.330 19:25:56 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:07:37.330 19:25:56 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:07:37.330 19:25:56 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:07:37.330 19:25:56 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:07:37.330 19:25:56 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:07:37.330 19:25:56 -- common/autotest_common.sh@1543 -- # continue 00:07:37.330 19:25:56 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:37.330 19:25:56 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:37.330 19:25:56 -- common/autotest_common.sh@10 -- # set +x 00:07:37.330 19:25:56 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:37.330 19:25:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:37.330 19:25:56 -- common/autotest_common.sh@10 -- # set +x 00:07:37.330 19:25:56 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:37.591 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:38.277 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:38.277 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:38.277 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:38.277 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:38.543 19:25:57 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:07:38.543 19:25:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:38.543 19:25:57 -- common/autotest_common.sh@10 -- # set +x 00:07:38.543 19:25:57 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:38.543 19:25:57 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:07:38.543 19:25:57 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:07:38.543 19:25:57 -- common/autotest_common.sh@1563 -- # bdfs=() 00:07:38.543 19:25:57 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:07:38.543 19:25:57 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:07:38.543 19:25:57 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:07:38.543 19:25:57 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:07:38.543 19:25:57 -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:38.543 19:25:57 -- common/autotest_common.sh@1498 -- # local bdfs 00:07:38.543 19:25:57 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:38.543 19:25:57 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:38.543 19:25:57 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:38.543 19:25:57 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:38.543 19:25:57 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:38.543 19:25:57 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:38.543 19:25:57 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:07:38.543 19:25:57 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:38.543 19:25:57 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:38.543 19:25:57 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:38.543 19:25:57 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:07:38.543 19:25:57 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:38.544 
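Each controller check above runs the same recipe: resolve the PCI BDF to its /dev/nvmeX node through sysfs, pull the OACS word out of nvme id-ctrl, test bit 3 (namespace management; 0x12a & 0x8 = 8 in this run), and continue past the device when unvmcap is already 0, i.e. there is no unallocated capacity to revert. A condensed sketch of the loop (names mirror the trace; the exact helper structure is simplified):

for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    # sysfs ties the BDF back to the kernel's controller name (nvme0..nvme3)
    ctrl=$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")
    oacs=$(nvme id-ctrl "/dev/$ctrl" | grep oacs | cut -d: -f2)
    unvmcap=$(nvme id-ctrl "/dev/$ctrl" | grep unvmcap | cut -d: -f2)
    # OACS bit 3 set = namespace management supported; unvmcap 0 = nothing
    # left unallocated, so there is nothing to revert on this controller.
    [[ $(( oacs & 0x8 )) -ne 0 && $unvmcap -eq 0 ]] && continue
done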
19:25:57 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:38.544 19:25:57 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:38.544 19:25:57 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:07:38.544 19:25:57 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:38.544 19:25:57 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:38.544 19:25:57 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:07:38.544 19:25:57 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:07:38.544 19:25:57 -- common/autotest_common.sh@1566 -- # device=0x0010 00:07:38.544 19:25:57 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:07:38.544 19:25:57 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:07:38.544 19:25:57 -- common/autotest_common.sh@1572 -- # return 0 00:07:38.544 19:25:57 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:07:38.544 19:25:57 -- common/autotest_common.sh@1580 -- # return 0 00:07:38.544 19:25:57 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:38.544 19:25:57 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:38.544 19:25:57 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:38.544 19:25:57 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:38.544 19:25:57 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:38.544 19:25:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:38.544 19:25:57 -- common/autotest_common.sh@10 -- # set +x 00:07:38.544 19:25:57 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:38.544 19:25:57 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:38.544 19:25:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:38.544 19:25:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.544 19:25:57 -- common/autotest_common.sh@10 -- # set +x 00:07:38.544 ************************************ 00:07:38.544 START TEST env 00:07:38.544 ************************************ 00:07:38.544 19:25:57 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:07:38.806 * Looking for test storage... 
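The device-ID loop just traced is the opal-revert gate: only controllers whose PCI device ID is 0x0a54 get queued for an opal revert, and the QEMU controllers in this run all report 0x0010, so the list stays empty and the cleanup returns immediately. A sketch of the gate:

# Collect only controllers autotest opal-reverts (PCI device ID 0x0a54).
bdfs=()
for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && bdfs+=("$bdf")
done
(( ${#bdfs[@]} )) || echo "no opal-capable devices; nothing to revert"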
00:07:38.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:07:38.806 19:25:57 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:38.806 19:25:57 env -- common/autotest_common.sh@1711 -- # lcov --version 00:07:38.806 19:25:57 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:38.806 19:25:57 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:38.806 19:25:57 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:38.806 19:25:57 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:38.806 19:25:57 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:38.806 19:25:57 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.806 19:25:57 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:38.806 19:25:57 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:38.806 19:25:57 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:38.806 19:25:57 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:38.806 19:25:57 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:38.806 19:25:57 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:38.806 19:25:57 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:38.806 19:25:57 env -- scripts/common.sh@344 -- # case "$op" in 00:07:38.806 19:25:57 env -- scripts/common.sh@345 -- # : 1 00:07:38.806 19:25:57 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:38.806 19:25:57 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:38.806 19:25:57 env -- scripts/common.sh@365 -- # decimal 1 00:07:38.806 19:25:57 env -- scripts/common.sh@353 -- # local d=1 00:07:38.806 19:25:57 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.806 19:25:57 env -- scripts/common.sh@355 -- # echo 1 00:07:38.806 19:25:57 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:38.806 19:25:57 env -- scripts/common.sh@366 -- # decimal 2 00:07:38.806 19:25:57 env -- scripts/common.sh@353 -- # local d=2 00:07:38.806 19:25:57 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.806 19:25:57 env -- scripts/common.sh@355 -- # echo 2 00:07:38.806 19:25:57 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:38.806 19:25:57 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:38.806 19:25:57 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:38.806 19:25:57 env -- scripts/common.sh@368 -- # return 0 00:07:38.806 19:25:57 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.806 19:25:57 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:38.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.806 --rc genhtml_branch_coverage=1 00:07:38.806 --rc genhtml_function_coverage=1 00:07:38.806 --rc genhtml_legend=1 00:07:38.806 --rc geninfo_all_blocks=1 00:07:38.806 --rc geninfo_unexecuted_blocks=1 00:07:38.806 00:07:38.806 ' 00:07:38.807 19:25:57 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:38.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.807 --rc genhtml_branch_coverage=1 00:07:38.807 --rc genhtml_function_coverage=1 00:07:38.807 --rc genhtml_legend=1 00:07:38.807 --rc geninfo_all_blocks=1 00:07:38.807 --rc geninfo_unexecuted_blocks=1 00:07:38.807 00:07:38.807 ' 00:07:38.807 19:25:57 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:38.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.807 --rc genhtml_branch_coverage=1 00:07:38.807 --rc genhtml_function_coverage=1 00:07:38.807 --rc 
genhtml_legend=1 00:07:38.807 --rc geninfo_all_blocks=1 00:07:38.807 --rc geninfo_unexecuted_blocks=1 00:07:38.807 00:07:38.807 ' 00:07:38.807 19:25:57 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:38.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.807 --rc genhtml_branch_coverage=1 00:07:38.807 --rc genhtml_function_coverage=1 00:07:38.807 --rc genhtml_legend=1 00:07:38.807 --rc geninfo_all_blocks=1 00:07:38.807 --rc geninfo_unexecuted_blocks=1 00:07:38.807 00:07:38.807 ' 00:07:38.807 19:25:57 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:38.807 19:25:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:38.807 19:25:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.807 19:25:57 env -- common/autotest_common.sh@10 -- # set +x 00:07:38.807 ************************************ 00:07:38.807 START TEST env_memory 00:07:38.807 ************************************ 00:07:38.807 19:25:57 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:07:38.807 00:07:38.807 00:07:38.807 CUnit - A unit testing framework for C - Version 2.1-3 00:07:38.807 http://cunit.sourceforge.net/ 00:07:38.807 00:07:38.807 00:07:38.807 Suite: memory 00:07:39.068 Test: alloc and free memory map ...[2024-12-05 19:25:57.858469] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:39.068 passed 00:07:39.068 Test: mem map translation ...[2024-12-05 19:25:57.898306] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:39.069 [2024-12-05 19:25:57.898389] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:39.069 [2024-12-05 19:25:57.898456] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:39.069 [2024-12-05 19:25:57.898472] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:39.069 passed 00:07:39.069 Test: mem map registration ...[2024-12-05 19:25:57.966957] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:39.069 [2024-12-05 19:25:57.967030] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:39.069 passed 00:07:39.069 Test: mem map adjacent registrations ...passed 00:07:39.069 00:07:39.069 Run Summary: Type Total Ran Passed Failed Inactive 00:07:39.069 suites 1 1 n/a 0 0 00:07:39.069 tests 4 4 4 0 0 00:07:39.069 asserts 152 152 152 0 n/a 00:07:39.069 00:07:39.069 Elapsed time = 0.385 seconds 00:07:39.069 00:07:39.069 real 0m0.420s 00:07:39.069 user 0m0.388s 00:07:39.069 sys 0m0.024s 00:07:39.069 19:25:58 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.069 ************************************ 00:07:39.069 END TEST env_memory 00:07:39.069 ************************************ 00:07:39.069 19:25:58 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:39.330 19:25:58 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:39.330 19:25:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.330 19:25:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.330 19:25:58 env -- common/autotest_common.sh@10 -- # set +x 00:07:39.330 ************************************ 00:07:39.330 START TEST env_vtophys 00:07:39.330 ************************************ 00:07:39.330 19:25:58 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:07:39.330 EAL: lib.eal log level changed from notice to debug 00:07:39.330 EAL: Detected lcore 0 as core 0 on socket 0 00:07:39.330 EAL: Detected lcore 1 as core 0 on socket 0 00:07:39.330 EAL: Detected lcore 2 as core 0 on socket 0 00:07:39.330 EAL: Detected lcore 3 as core 0 on socket 0 00:07:39.330 EAL: Detected lcore 4 as core 0 on socket 0 00:07:39.330 EAL: Detected lcore 5 as core 0 on socket 0 00:07:39.330 EAL: Detected lcore 6 as core 0 on socket 0 00:07:39.330 EAL: Detected lcore 7 as core 0 on socket 0 00:07:39.330 EAL: Detected lcore 8 as core 0 on socket 0 00:07:39.330 EAL: Detected lcore 9 as core 0 on socket 0 00:07:39.330 EAL: Maximum logical cores by configuration: 128 00:07:39.330 EAL: Detected CPU lcores: 10 00:07:39.330 EAL: Detected NUMA nodes: 1 00:07:39.330 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:39.330 EAL: Detected shared linkage of DPDK 00:07:39.330 EAL: No shared files mode enabled, IPC will be disabled 00:07:39.330 EAL: Selected IOVA mode 'PA' 00:07:39.330 EAL: Probing VFIO support... 00:07:39.330 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:39.330 EAL: VFIO modules not loaded, skipping VFIO support... 00:07:39.330 EAL: Ask a virtual area of 0x2e000 bytes 00:07:39.330 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:39.330 EAL: Setting up physically contiguous memory... 
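(Aside: the EAL lines above assume hugepage-backed memory has already been reserved. A minimal sketch of that prerequisite, assuming the scripts/setup.sh helper from the same spdk_repo checkout and its HUGEMEM knob, which takes megabytes; the 2048 value here is illustrative, not necessarily what this run used.)

  # Reserve ~2 GB of 2 MB hugepages and rebind devices for userspace drivers.
  sudo HUGEMEM=2048 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
  # Print the current hugepage and device binding state.
  /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
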
00:07:39.330 EAL: Setting maximum number of open files to 524288 00:07:39.330 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:39.330 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:39.330 EAL: Ask a virtual area of 0x61000 bytes 00:07:39.330 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:39.330 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:39.330 EAL: Ask a virtual area of 0x400000000 bytes 00:07:39.330 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:39.330 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:39.330 EAL: Ask a virtual area of 0x61000 bytes 00:07:39.330 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:39.330 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:39.330 EAL: Ask a virtual area of 0x400000000 bytes 00:07:39.330 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:39.330 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:39.330 EAL: Ask a virtual area of 0x61000 bytes 00:07:39.330 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:39.330 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:39.330 EAL: Ask a virtual area of 0x400000000 bytes 00:07:39.330 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:39.330 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:39.330 EAL: Ask a virtual area of 0x61000 bytes 00:07:39.330 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:39.330 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:39.330 EAL: Ask a virtual area of 0x400000000 bytes 00:07:39.330 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:39.330 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:39.330 EAL: Hugepages will be freed exactly as allocated. 00:07:39.330 EAL: No shared files mode enabled, IPC is disabled 00:07:39.330 EAL: No shared files mode enabled, IPC is disabled 00:07:39.330 EAL: TSC frequency is ~2600000 KHz 00:07:39.330 EAL: Main lcore 0 is ready (tid=7fa47a8d3a40;cpuset=[0]) 00:07:39.330 EAL: Trying to obtain current memory policy. 00:07:39.330 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:39.330 EAL: Restoring previous memory policy: 0 00:07:39.330 EAL: request: mp_malloc_sync 00:07:39.330 EAL: No shared files mode enabled, IPC is disabled 00:07:39.330 EAL: Heap on socket 0 was expanded by 2MB 00:07:39.330 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:07:39.330 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:39.330 EAL: Mem event callback 'spdk:(nil)' registered 00:07:39.330 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:07:39.592 00:07:39.592 00:07:39.592 CUnit - A unit testing framework for C - Version 2.1-3 00:07:39.592 http://cunit.sourceforge.net/ 00:07:39.592 00:07:39.592 00:07:39.592 Suite: components_suite 00:07:39.854 Test: vtophys_malloc_test ...passed 00:07:39.854 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:07:39.854 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:39.854 EAL: Restoring previous memory policy: 4 00:07:39.854 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.854 EAL: request: mp_malloc_sync 00:07:39.854 EAL: No shared files mode enabled, IPC is disabled 00:07:39.854 EAL: Heap on socket 0 was expanded by 4MB 00:07:39.854 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.854 EAL: request: mp_malloc_sync 00:07:39.854 EAL: No shared files mode enabled, IPC is disabled 00:07:39.854 EAL: Heap on socket 0 was shrunk by 4MB 00:07:39.854 EAL: Trying to obtain current memory policy. 00:07:39.854 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:39.854 EAL: Restoring previous memory policy: 4 00:07:39.854 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.854 EAL: request: mp_malloc_sync 00:07:39.854 EAL: No shared files mode enabled, IPC is disabled 00:07:39.854 EAL: Heap on socket 0 was expanded by 6MB 00:07:39.854 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.854 EAL: request: mp_malloc_sync 00:07:39.854 EAL: No shared files mode enabled, IPC is disabled 00:07:39.854 EAL: Heap on socket 0 was shrunk by 6MB 00:07:39.855 EAL: Trying to obtain current memory policy. 00:07:39.855 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:39.855 EAL: Restoring previous memory policy: 4 00:07:39.855 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.855 EAL: request: mp_malloc_sync 00:07:39.855 EAL: No shared files mode enabled, IPC is disabled 00:07:39.855 EAL: Heap on socket 0 was expanded by 10MB 00:07:39.855 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.855 EAL: request: mp_malloc_sync 00:07:39.855 EAL: No shared files mode enabled, IPC is disabled 00:07:39.855 EAL: Heap on socket 0 was shrunk by 10MB 00:07:39.855 EAL: Trying to obtain current memory policy. 00:07:39.855 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:39.855 EAL: Restoring previous memory policy: 4 00:07:39.855 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.855 EAL: request: mp_malloc_sync 00:07:39.855 EAL: No shared files mode enabled, IPC is disabled 00:07:39.855 EAL: Heap on socket 0 was expanded by 18MB 00:07:39.855 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.855 EAL: request: mp_malloc_sync 00:07:39.855 EAL: No shared files mode enabled, IPC is disabled 00:07:39.855 EAL: Heap on socket 0 was shrunk by 18MB 00:07:39.855 EAL: Trying to obtain current memory policy. 00:07:39.855 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:39.855 EAL: Restoring previous memory policy: 4 00:07:39.855 EAL: Calling mem event callback 'spdk:(nil)' 00:07:39.855 EAL: request: mp_malloc_sync 00:07:39.855 EAL: No shared files mode enabled, IPC is disabled 00:07:39.855 EAL: Heap on socket 0 was expanded by 34MB 00:07:40.116 EAL: Calling mem event callback 'spdk:(nil)' 00:07:40.116 EAL: request: mp_malloc_sync 00:07:40.116 EAL: No shared files mode enabled, IPC is disabled 00:07:40.116 EAL: Heap on socket 0 was shrunk by 34MB 00:07:40.116 EAL: Trying to obtain current memory policy. 
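(Aside: each expand/shrink pair in this suite is the DPDK heap mapping and unmapping 2 MB hugepages on demand. One way to observe that from a second shell while vtophys runs, using only standard Linux counters; the watch interval is arbitrary.)

  # HugePages_Free drops as the heap expands and recovers as it shrinks.
  watch -n 0.5 "grep -E 'HugePages_(Total|Free)' /proc/meminfo"
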
00:07:40.116 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:40.116 EAL: Restoring previous memory policy: 4 00:07:40.116 EAL: Calling mem event callback 'spdk:(nil)' 00:07:40.116 EAL: request: mp_malloc_sync 00:07:40.116 EAL: No shared files mode enabled, IPC is disabled 00:07:40.116 EAL: Heap on socket 0 was expanded by 66MB 00:07:40.116 EAL: Calling mem event callback 'spdk:(nil)' 00:07:40.116 EAL: request: mp_malloc_sync 00:07:40.116 EAL: No shared files mode enabled, IPC is disabled 00:07:40.116 EAL: Heap on socket 0 was shrunk by 66MB 00:07:40.116 EAL: Trying to obtain current memory policy. 00:07:40.116 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:40.376 EAL: Restoring previous memory policy: 4 00:07:40.376 EAL: Calling mem event callback 'spdk:(nil)' 00:07:40.376 EAL: request: mp_malloc_sync 00:07:40.376 EAL: No shared files mode enabled, IPC is disabled 00:07:40.376 EAL: Heap on socket 0 was expanded by 130MB 00:07:40.376 EAL: Calling mem event callback 'spdk:(nil)' 00:07:40.376 EAL: request: mp_malloc_sync 00:07:40.376 EAL: No shared files mode enabled, IPC is disabled 00:07:40.376 EAL: Heap on socket 0 was shrunk by 130MB 00:07:40.634 EAL: Trying to obtain current memory policy. 00:07:40.634 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:40.634 EAL: Restoring previous memory policy: 4 00:07:40.634 EAL: Calling mem event callback 'spdk:(nil)' 00:07:40.634 EAL: request: mp_malloc_sync 00:07:40.634 EAL: No shared files mode enabled, IPC is disabled 00:07:40.634 EAL: Heap on socket 0 was expanded by 258MB 00:07:40.894 EAL: Calling mem event callback 'spdk:(nil)' 00:07:40.894 EAL: request: mp_malloc_sync 00:07:40.894 EAL: No shared files mode enabled, IPC is disabled 00:07:40.894 EAL: Heap on socket 0 was shrunk by 258MB 00:07:41.155 EAL: Trying to obtain current memory policy. 00:07:41.155 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:41.421 EAL: Restoring previous memory policy: 4 00:07:41.421 EAL: Calling mem event callback 'spdk:(nil)' 00:07:41.421 EAL: request: mp_malloc_sync 00:07:41.421 EAL: No shared files mode enabled, IPC is disabled 00:07:41.421 EAL: Heap on socket 0 was expanded by 514MB 00:07:41.996 EAL: Calling mem event callback 'spdk:(nil)' 00:07:42.258 EAL: request: mp_malloc_sync 00:07:42.258 EAL: No shared files mode enabled, IPC is disabled 00:07:42.258 EAL: Heap on socket 0 was shrunk by 514MB 00:07:42.828 EAL: Trying to obtain current memory policy. 
00:07:42.828 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:42.828 EAL: Restoring previous memory policy: 4 00:07:42.828 EAL: Calling mem event callback 'spdk:(nil)' 00:07:42.828 EAL: request: mp_malloc_sync 00:07:42.828 EAL: No shared files mode enabled, IPC is disabled 00:07:42.828 EAL: Heap on socket 0 was expanded by 1026MB 00:07:44.223 EAL: Calling mem event callback 'spdk:(nil)' 00:07:44.483 EAL: request: mp_malloc_sync 00:07:44.483 EAL: No shared files mode enabled, IPC is disabled 00:07:44.483 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:45.424 passed 00:07:45.424 00:07:45.424 Run Summary: Type Total Ran Passed Failed Inactive 00:07:45.424 suites 1 1 n/a 0 0 00:07:45.424 tests 2 2 2 0 0 00:07:45.424 asserts 5775 5775 5775 0 n/a 00:07:45.424 00:07:45.424 Elapsed time = 5.901 seconds 00:07:45.424 EAL: Calling mem event callback 'spdk:(nil)' 00:07:45.424 EAL: request: mp_malloc_sync 00:07:45.424 EAL: No shared files mode enabled, IPC is disabled 00:07:45.424 EAL: Heap on socket 0 was shrunk by 2MB 00:07:45.424 EAL: No shared files mode enabled, IPC is disabled 00:07:45.424 EAL: No shared files mode enabled, IPC is disabled 00:07:45.424 EAL: No shared files mode enabled, IPC is disabled 00:07:45.424 00:07:45.424 real 0m6.205s 00:07:45.424 user 0m5.024s 00:07:45.424 sys 0m1.010s 00:07:45.424 19:26:04 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.424 ************************************ 00:07:45.424 END TEST env_vtophys 00:07:45.424 ************************************ 00:07:45.424 19:26:04 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:45.424 19:26:04 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:45.424 19:26:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.424 19:26:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.424 19:26:04 env -- common/autotest_common.sh@10 -- # set +x 00:07:45.424 ************************************ 00:07:45.424 START TEST env_pci 00:07:45.424 ************************************ 00:07:45.424 19:26:04 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:07:45.687 00:07:45.687 00:07:45.687 CUnit - A unit testing framework for C - Version 2.1-3 00:07:45.687 http://cunit.sourceforge.net/ 00:07:45.687 00:07:45.687 00:07:45.687 Suite: pci 00:07:45.687 Test: pci_hook ...[2024-12-05 19:26:04.438147] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57084 has claimed it 00:07:45.687 passed 00:07:45.687 00:07:45.687 Run Summary: Type Total Ran Passed Failed Inactive 00:07:45.687 suites 1 1 n/a 0 0 00:07:45.687 tests 1 1 1 0 0 00:07:45.687 asserts 25 25 25 0 n/a 00:07:45.687 00:07:45.687 Elapsed time = 0.007 seconds 00:07:45.687 EAL: Cannot find device (10000:00:01.0) 00:07:45.687 EAL: Failed to attach device on primary process 00:07:45.687 00:07:45.687 real 0m0.066s 00:07:45.687 user 0m0.027s 00:07:45.687 sys 0m0.038s 00:07:45.687 19:26:04 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.687 ************************************ 00:07:45.687 END TEST env_pci 00:07:45.687 ************************************ 00:07:45.687 19:26:04 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:45.687 19:26:04 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:45.687 19:26:04 env -- env/env.sh@15 -- # uname 00:07:45.687 19:26:04 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:45.687 19:26:04 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:45.687 19:26:04 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:45.687 19:26:04 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:45.687 19:26:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.687 19:26:04 env -- common/autotest_common.sh@10 -- # set +x 00:07:45.687 ************************************ 00:07:45.687 START TEST env_dpdk_post_init 00:07:45.687 ************************************ 00:07:45.687 19:26:04 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:45.687 EAL: Detected CPU lcores: 10 00:07:45.687 EAL: Detected NUMA nodes: 1 00:07:45.687 EAL: Detected shared linkage of DPDK 00:07:45.687 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:45.687 EAL: Selected IOVA mode 'PA' 00:07:45.947 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:45.947 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:07:45.947 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:07:45.947 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:07:45.947 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:07:45.947 Starting DPDK initialization... 00:07:45.947 Starting SPDK post initialization... 00:07:45.947 SPDK NVMe probe 00:07:45.947 Attaching to 0000:00:10.0 00:07:45.947 Attaching to 0000:00:11.0 00:07:45.947 Attaching to 0000:00:12.0 00:07:45.947 Attaching to 0000:00:13.0 00:07:45.947 Attached to 0000:00:10.0 00:07:45.947 Attached to 0000:00:11.0 00:07:45.947 Attached to 0000:00:13.0 00:07:45.947 Attached to 0000:00:12.0 00:07:45.947 Cleaning up... 
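(Aside: the full command line behind this test is visible in the trace above; re-running it by hand under the same repo layout looks like the sketch below. Root is assumed because the binary maps hugepages and probes PCI devices.)

  # One core (-c 0x1) and a fixed base virtual address, exactly as the
  # harness passed them, so DPDK maps memory at predictable addresses.
  sudo /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
      -c 0x1 --base-virtaddr=0x200000000000
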
00:07:45.947 00:07:45.947 real 0m0.273s 00:07:45.947 user 0m0.092s 00:07:45.947 sys 0m0.081s 00:07:45.947 19:26:04 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.947 ************************************ 00:07:45.947 END TEST env_dpdk_post_init 00:07:45.947 ************************************ 00:07:45.947 19:26:04 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:45.947 19:26:04 env -- env/env.sh@26 -- # uname 00:07:45.947 19:26:04 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:45.947 19:26:04 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:45.947 19:26:04 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.947 19:26:04 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.947 19:26:04 env -- common/autotest_common.sh@10 -- # set +x 00:07:45.947 ************************************ 00:07:45.947 START TEST env_mem_callbacks 00:07:45.947 ************************************ 00:07:45.947 19:26:04 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:07:45.947 EAL: Detected CPU lcores: 10 00:07:45.947 EAL: Detected NUMA nodes: 1 00:07:45.947 EAL: Detected shared linkage of DPDK 00:07:45.947 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:45.947 EAL: Selected IOVA mode 'PA' 00:07:46.206 00:07:46.206 00:07:46.206 CUnit - A unit testing framework for C - Version 2.1-3 00:07:46.206 http://cunit.sourceforge.net/ 00:07:46.206 00:07:46.206 00:07:46.206 Suite: memory 00:07:46.206 Test: test ... 00:07:46.206 register 0x200000200000 2097152 00:07:46.206 malloc 3145728 00:07:46.206 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:46.206 register 0x200000400000 4194304 00:07:46.206 buf 0x2000004fffc0 len 3145728 PASSED 00:07:46.206 malloc 64 00:07:46.206 buf 0x2000004ffec0 len 64 PASSED 00:07:46.206 malloc 4194304 00:07:46.206 register 0x200000800000 6291456 00:07:46.206 buf 0x2000009fffc0 len 4194304 PASSED 00:07:46.206 free 0x2000004fffc0 3145728 00:07:46.206 free 0x2000004ffec0 64 00:07:46.206 unregister 0x200000400000 4194304 PASSED 00:07:46.206 free 0x2000009fffc0 4194304 00:07:46.206 unregister 0x200000800000 6291456 PASSED 00:07:46.206 malloc 8388608 00:07:46.206 register 0x200000400000 10485760 00:07:46.206 buf 0x2000005fffc0 len 8388608 PASSED 00:07:46.206 free 0x2000005fffc0 8388608 00:07:46.206 unregister 0x200000400000 10485760 PASSED 00:07:46.206 passed 00:07:46.206 00:07:46.206 Run Summary: Type Total Ran Passed Failed Inactive 00:07:46.206 suites 1 1 n/a 0 0 00:07:46.206 tests 1 1 1 0 0 00:07:46.206 asserts 15 15 15 0 n/a 00:07:46.206 00:07:46.206 Elapsed time = 0.063 seconds 00:07:46.206 00:07:46.206 real 0m0.243s 00:07:46.206 user 0m0.087s 00:07:46.206 sys 0m0.053s 00:07:46.206 ************************************ 00:07:46.206 END TEST env_mem_callbacks 00:07:46.206 ************************************ 00:07:46.206 19:26:05 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.206 19:26:05 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:46.206 00:07:46.206 real 0m7.692s 00:07:46.206 user 0m5.761s 00:07:46.206 sys 0m1.439s 00:07:46.206 19:26:05 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.206 19:26:05 env -- common/autotest_common.sh@10 -- # set +x 00:07:46.206 ************************************ 00:07:46.206 END TEST env 00:07:46.206 
************************************ 00:07:46.466 19:26:05 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:46.466 19:26:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.466 19:26:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.466 19:26:05 -- common/autotest_common.sh@10 -- # set +x 00:07:46.466 ************************************ 00:07:46.466 START TEST rpc 00:07:46.466 ************************************ 00:07:46.466 19:26:05 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:07:46.466 * Looking for test storage... 00:07:46.466 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:46.466 19:26:05 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:46.466 19:26:05 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:46.466 19:26:05 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:46.466 19:26:05 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:46.466 19:26:05 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.466 19:26:05 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.466 19:26:05 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.466 19:26:05 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.466 19:26:05 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.466 19:26:05 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.466 19:26:05 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.466 19:26:05 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.466 19:26:05 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.466 19:26:05 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.466 19:26:05 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.466 19:26:05 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:46.466 19:26:05 rpc -- scripts/common.sh@345 -- # : 1 00:07:46.466 19:26:05 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.466 19:26:05 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.466 19:26:05 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:46.466 19:26:05 rpc -- scripts/common.sh@353 -- # local d=1 00:07:46.466 19:26:05 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.466 19:26:05 rpc -- scripts/common.sh@355 -- # echo 1 00:07:46.466 19:26:05 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.466 19:26:05 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:46.466 19:26:05 rpc -- scripts/common.sh@353 -- # local d=2 00:07:46.466 19:26:05 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.466 19:26:05 rpc -- scripts/common.sh@355 -- # echo 2 00:07:46.466 19:26:05 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.466 19:26:05 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.466 19:26:05 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.466 19:26:05 rpc -- scripts/common.sh@368 -- # return 0 00:07:46.466 19:26:05 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.466 19:26:05 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:46.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.466 --rc genhtml_branch_coverage=1 00:07:46.466 --rc genhtml_function_coverage=1 00:07:46.466 --rc genhtml_legend=1 00:07:46.466 --rc geninfo_all_blocks=1 00:07:46.466 --rc geninfo_unexecuted_blocks=1 00:07:46.466 00:07:46.466 ' 00:07:46.466 19:26:05 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:46.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.466 --rc genhtml_branch_coverage=1 00:07:46.466 --rc genhtml_function_coverage=1 00:07:46.466 --rc genhtml_legend=1 00:07:46.466 --rc geninfo_all_blocks=1 00:07:46.466 --rc geninfo_unexecuted_blocks=1 00:07:46.466 00:07:46.466 ' 00:07:46.466 19:26:05 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:46.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.466 --rc genhtml_branch_coverage=1 00:07:46.466 --rc genhtml_function_coverage=1 00:07:46.466 --rc genhtml_legend=1 00:07:46.466 --rc geninfo_all_blocks=1 00:07:46.466 --rc geninfo_unexecuted_blocks=1 00:07:46.466 00:07:46.466 ' 00:07:46.466 19:26:05 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:46.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.466 --rc genhtml_branch_coverage=1 00:07:46.466 --rc genhtml_function_coverage=1 00:07:46.466 --rc genhtml_legend=1 00:07:46.466 --rc geninfo_all_blocks=1 00:07:46.466 --rc geninfo_unexecuted_blocks=1 00:07:46.466 00:07:46.466 ' 00:07:46.466 19:26:05 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57211 00:07:46.466 19:26:05 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:46.466 19:26:05 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57211 00:07:46.466 19:26:05 rpc -- common/autotest_common.sh@835 -- # '[' -z 57211 ']' 00:07:46.466 19:26:05 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.466 19:26:05 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.466 19:26:05 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
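(Aside: waitforlisten above blocks until spdk_tgt answers on /var/tmp/spdk.sock. A minimal standalone sketch of the same start-and-wait pattern follows; the polling loop and the rpc_get_methods probe are illustrative, not the harness's exact implementation.)

  # Start the target with the bdev tracepoint group enabled, as in this run.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  spdk_pid=$!
  # Poll the default RPC socket until any RPC succeeds.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
  echo "spdk_tgt (pid $spdk_pid) is listening"
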
00:07:46.466 19:26:05 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.466 19:26:05 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:07:46.466 19:26:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.726 [2024-12-05 19:26:05.492862] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:07:46.726 [2024-12-05 19:26:05.493020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57211 ] 00:07:46.726 [2024-12-05 19:26:05.653125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.987 [2024-12-05 19:26:05.797251] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:46.987 [2024-12-05 19:26:05.797339] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57211' to capture a snapshot of events at runtime. 00:07:46.987 [2024-12-05 19:26:05.797355] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:46.987 [2024-12-05 19:26:05.797370] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:46.987 [2024-12-05 19:26:05.797382] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57211 for offline analysis/debug. 00:07:46.987 [2024-12-05 19:26:05.798631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.583 19:26:06 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.583 19:26:06 rpc -- common/autotest_common.sh@868 -- # return 0 00:07:47.583 19:26:06 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:47.583 19:26:06 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:07:47.583 19:26:06 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:47.583 19:26:06 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:47.583 19:26:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:47.583 19:26:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.583 19:26:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.583 ************************************ 00:07:47.583 START TEST rpc_integrity 00:07:47.583 ************************************ 00:07:47.583 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:47.583 19:26:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:47.583 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.583 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:47.583 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.583 19:26:06 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:47.844 19:26:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:47.844 19:26:06 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:47.844 19:26:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
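(Aside: rpc_cmd in the trace is a thin wrapper around scripts/rpc.py talking to the target started above. The integrity steps up to this point, issued by hand; the jq length checks mirror what the test asserts.)

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # No bdevs registered yet: bdev_get_bdevs returns an empty JSON array.
  $rpc bdev_get_bdevs | jq length        # -> 0
  # Create an 8 MB malloc bdev with 512-byte blocks; the new name is printed.
  $rpc bdev_malloc_create 8 512          # -> Malloc0
  $rpc bdev_get_bdevs | jq length        # -> 1
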
00:07:47.844 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.844 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:47.844 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.844 19:26:06 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:47.844 19:26:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:47.844 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.844 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:47.844 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.844 19:26:06 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:47.844 { 00:07:47.844 "name": "Malloc0", 00:07:47.844 "aliases": [ 00:07:47.844 "25669724-a182-484b-9847-01331a97f922" 00:07:47.844 ], 00:07:47.844 "product_name": "Malloc disk", 00:07:47.844 "block_size": 512, 00:07:47.844 "num_blocks": 16384, 00:07:47.844 "uuid": "25669724-a182-484b-9847-01331a97f922", 00:07:47.844 "assigned_rate_limits": { 00:07:47.844 "rw_ios_per_sec": 0, 00:07:47.844 "rw_mbytes_per_sec": 0, 00:07:47.844 "r_mbytes_per_sec": 0, 00:07:47.844 "w_mbytes_per_sec": 0 00:07:47.844 }, 00:07:47.844 "claimed": false, 00:07:47.844 "zoned": false, 00:07:47.845 "supported_io_types": { 00:07:47.845 "read": true, 00:07:47.845 "write": true, 00:07:47.845 "unmap": true, 00:07:47.845 "flush": true, 00:07:47.845 "reset": true, 00:07:47.845 "nvme_admin": false, 00:07:47.845 "nvme_io": false, 00:07:47.845 "nvme_io_md": false, 00:07:47.845 "write_zeroes": true, 00:07:47.845 "zcopy": true, 00:07:47.845 "get_zone_info": false, 00:07:47.845 "zone_management": false, 00:07:47.845 "zone_append": false, 00:07:47.845 "compare": false, 00:07:47.845 "compare_and_write": false, 00:07:47.845 "abort": true, 00:07:47.845 "seek_hole": false, 00:07:47.845 "seek_data": false, 00:07:47.845 "copy": true, 00:07:47.845 "nvme_iov_md": false 00:07:47.845 }, 00:07:47.845 "memory_domains": [ 00:07:47.845 { 00:07:47.845 "dma_device_id": "system", 00:07:47.845 "dma_device_type": 1 00:07:47.845 }, 00:07:47.845 { 00:07:47.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.845 "dma_device_type": 2 00:07:47.845 } 00:07:47.845 ], 00:07:47.845 "driver_specific": {} 00:07:47.845 } 00:07:47.845 ]' 00:07:47.845 19:26:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:47.845 19:26:06 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:47.845 19:26:06 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:47.845 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.845 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:47.845 [2024-12-05 19:26:06.693220] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:47.845 [2024-12-05 19:26:06.693331] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.845 [2024-12-05 19:26:06.693376] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:47.845 [2024-12-05 19:26:06.693397] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.845 [2024-12-05 19:26:06.696072] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.845 [2024-12-05 19:26:06.696161] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:47.845 
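(Aside: the passthru claim just logged, together with the cleanup this suite performs afterwards, reduces to three RPCs, reusing $rpc from the previous sketch. Deletion order matters because Passthru0 holds an exclusive_write claim on Malloc0.)

  # Layer a passthru bdev on Malloc0; it claims the base bdev exclusively.
  $rpc bdev_passthru_create -b Malloc0 -p Passthru0
  # Tear down in reverse order: drop the claim, then delete the base.
  $rpc bdev_passthru_delete Passthru0
  $rpc bdev_malloc_delete Malloc0
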
Passthru0 00:07:47.845 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.845 19:26:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:47.845 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.845 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:47.845 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.845 19:26:06 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:47.845 { 00:07:47.845 "name": "Malloc0", 00:07:47.845 "aliases": [ 00:07:47.845 "25669724-a182-484b-9847-01331a97f922" 00:07:47.845 ], 00:07:47.845 "product_name": "Malloc disk", 00:07:47.845 "block_size": 512, 00:07:47.845 "num_blocks": 16384, 00:07:47.845 "uuid": "25669724-a182-484b-9847-01331a97f922", 00:07:47.845 "assigned_rate_limits": { 00:07:47.845 "rw_ios_per_sec": 0, 00:07:47.845 "rw_mbytes_per_sec": 0, 00:07:47.845 "r_mbytes_per_sec": 0, 00:07:47.845 "w_mbytes_per_sec": 0 00:07:47.845 }, 00:07:47.845 "claimed": true, 00:07:47.845 "claim_type": "exclusive_write", 00:07:47.845 "zoned": false, 00:07:47.845 "supported_io_types": { 00:07:47.845 "read": true, 00:07:47.845 "write": true, 00:07:47.845 "unmap": true, 00:07:47.845 "flush": true, 00:07:47.845 "reset": true, 00:07:47.845 "nvme_admin": false, 00:07:47.845 "nvme_io": false, 00:07:47.845 "nvme_io_md": false, 00:07:47.845 "write_zeroes": true, 00:07:47.845 "zcopy": true, 00:07:47.845 "get_zone_info": false, 00:07:47.845 "zone_management": false, 00:07:47.845 "zone_append": false, 00:07:47.845 "compare": false, 00:07:47.845 "compare_and_write": false, 00:07:47.845 "abort": true, 00:07:47.845 "seek_hole": false, 00:07:47.845 "seek_data": false, 00:07:47.845 "copy": true, 00:07:47.845 "nvme_iov_md": false 00:07:47.845 }, 00:07:47.845 "memory_domains": [ 00:07:47.845 { 00:07:47.845 "dma_device_id": "system", 00:07:47.845 "dma_device_type": 1 00:07:47.845 }, 00:07:47.845 { 00:07:47.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.845 "dma_device_type": 2 00:07:47.845 } 00:07:47.845 ], 00:07:47.845 "driver_specific": {} 00:07:47.845 }, 00:07:47.845 { 00:07:47.845 "name": "Passthru0", 00:07:47.845 "aliases": [ 00:07:47.845 "f956a684-6aa0-5db6-8f71-a449a3b8d2d7" 00:07:47.845 ], 00:07:47.845 "product_name": "passthru", 00:07:47.845 "block_size": 512, 00:07:47.845 "num_blocks": 16384, 00:07:47.845 "uuid": "f956a684-6aa0-5db6-8f71-a449a3b8d2d7", 00:07:47.845 "assigned_rate_limits": { 00:07:47.845 "rw_ios_per_sec": 0, 00:07:47.845 "rw_mbytes_per_sec": 0, 00:07:47.845 "r_mbytes_per_sec": 0, 00:07:47.845 "w_mbytes_per_sec": 0 00:07:47.845 }, 00:07:47.845 "claimed": false, 00:07:47.845 "zoned": false, 00:07:47.845 "supported_io_types": { 00:07:47.845 "read": true, 00:07:47.845 "write": true, 00:07:47.845 "unmap": true, 00:07:47.845 "flush": true, 00:07:47.845 "reset": true, 00:07:47.845 "nvme_admin": false, 00:07:47.845 "nvme_io": false, 00:07:47.845 "nvme_io_md": false, 00:07:47.845 "write_zeroes": true, 00:07:47.845 "zcopy": true, 00:07:47.845 "get_zone_info": false, 00:07:47.845 "zone_management": false, 00:07:47.845 "zone_append": false, 00:07:47.845 "compare": false, 00:07:47.845 "compare_and_write": false, 00:07:47.845 "abort": true, 00:07:47.845 "seek_hole": false, 00:07:47.845 "seek_data": false, 00:07:47.845 "copy": true, 00:07:47.845 "nvme_iov_md": false 00:07:47.845 }, 00:07:47.845 "memory_domains": [ 00:07:47.845 { 00:07:47.845 "dma_device_id": "system", 00:07:47.845 "dma_device_type": 1 00:07:47.845 }, 
00:07:47.845 { 00:07:47.845 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.845 "dma_device_type": 2 00:07:47.845 } 00:07:47.845 ], 00:07:47.845 "driver_specific": { 00:07:47.845 "passthru": { 00:07:47.845 "name": "Passthru0", 00:07:47.845 "base_bdev_name": "Malloc0" 00:07:47.845 } 00:07:47.845 } 00:07:47.845 } 00:07:47.845 ]' 00:07:47.845 19:26:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:47.845 19:26:06 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:47.845 19:26:06 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:47.845 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.845 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:47.845 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.845 19:26:06 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:47.845 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.845 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:47.845 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.845 19:26:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:47.845 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.845 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:47.845 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.845 19:26:06 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:47.845 19:26:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:47.845 19:26:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:47.845 00:07:47.845 real 0m0.262s 00:07:47.845 user 0m0.130s 00:07:47.845 sys 0m0.034s 00:07:47.845 ************************************ 00:07:47.845 END TEST rpc_integrity 00:07:47.845 ************************************ 00:07:47.845 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.845 19:26:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.106 19:26:06 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:48.106 19:26:06 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.106 19:26:06 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.106 19:26:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.106 ************************************ 00:07:48.106 START TEST rpc_plugins 00:07:48.106 ************************************ 00:07:48.106 19:26:06 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:07:48.106 19:26:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:48.106 19:26:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.106 19:26:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:48.106 19:26:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.106 19:26:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:48.106 19:26:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:48.106 19:26:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.106 19:26:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:48.106 19:26:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.106 19:26:06 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:48.106 { 00:07:48.106 "name": "Malloc1", 00:07:48.106 "aliases": [ 00:07:48.106 "fe4167f9-3f7e-47fb-bc32-15c26f05223b" 00:07:48.106 ], 00:07:48.106 "product_name": "Malloc disk", 00:07:48.106 "block_size": 4096, 00:07:48.106 "num_blocks": 256, 00:07:48.106 "uuid": "fe4167f9-3f7e-47fb-bc32-15c26f05223b", 00:07:48.106 "assigned_rate_limits": { 00:07:48.106 "rw_ios_per_sec": 0, 00:07:48.106 "rw_mbytes_per_sec": 0, 00:07:48.106 "r_mbytes_per_sec": 0, 00:07:48.106 "w_mbytes_per_sec": 0 00:07:48.106 }, 00:07:48.106 "claimed": false, 00:07:48.106 "zoned": false, 00:07:48.106 "supported_io_types": { 00:07:48.106 "read": true, 00:07:48.106 "write": true, 00:07:48.106 "unmap": true, 00:07:48.106 "flush": true, 00:07:48.106 "reset": true, 00:07:48.106 "nvme_admin": false, 00:07:48.106 "nvme_io": false, 00:07:48.106 "nvme_io_md": false, 00:07:48.106 "write_zeroes": true, 00:07:48.106 "zcopy": true, 00:07:48.106 "get_zone_info": false, 00:07:48.106 "zone_management": false, 00:07:48.106 "zone_append": false, 00:07:48.106 "compare": false, 00:07:48.106 "compare_and_write": false, 00:07:48.106 "abort": true, 00:07:48.106 "seek_hole": false, 00:07:48.107 "seek_data": false, 00:07:48.107 "copy": true, 00:07:48.107 "nvme_iov_md": false 00:07:48.107 }, 00:07:48.107 "memory_domains": [ 00:07:48.107 { 00:07:48.107 "dma_device_id": "system", 00:07:48.107 "dma_device_type": 1 00:07:48.107 }, 00:07:48.107 { 00:07:48.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.107 "dma_device_type": 2 00:07:48.107 } 00:07:48.107 ], 00:07:48.107 "driver_specific": {} 00:07:48.107 } 00:07:48.107 ]' 00:07:48.107 19:26:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:48.107 19:26:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:48.107 19:26:06 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:48.107 19:26:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.107 19:26:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:48.107 19:26:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.107 19:26:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:48.107 19:26:06 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.107 19:26:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:48.107 19:26:06 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.107 19:26:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:48.107 19:26:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:48.107 ************************************ 00:07:48.107 END TEST rpc_plugins 00:07:48.107 ************************************ 00:07:48.107 19:26:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:48.107 00:07:48.107 real 0m0.125s 00:07:48.107 user 0m0.060s 00:07:48.107 sys 0m0.024s 00:07:48.107 19:26:07 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.107 19:26:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:48.107 19:26:07 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:48.107 19:26:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.107 19:26:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.107 19:26:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.107 ************************************ 00:07:48.107 START TEST rpc_trace_cmd_test 
00:07:48.107 ************************************ 00:07:48.107 19:26:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:07:48.107 19:26:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:48.107 19:26:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:48.107 19:26:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.107 19:26:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.370 19:26:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.370 19:26:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:48.370 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57211", 00:07:48.370 "tpoint_group_mask": "0x8", 00:07:48.370 "iscsi_conn": { 00:07:48.370 "mask": "0x2", 00:07:48.370 "tpoint_mask": "0x0" 00:07:48.370 }, 00:07:48.370 "scsi": { 00:07:48.370 "mask": "0x4", 00:07:48.370 "tpoint_mask": "0x0" 00:07:48.370 }, 00:07:48.370 "bdev": { 00:07:48.370 "mask": "0x8", 00:07:48.370 "tpoint_mask": "0xffffffffffffffff" 00:07:48.370 }, 00:07:48.370 "nvmf_rdma": { 00:07:48.370 "mask": "0x10", 00:07:48.370 "tpoint_mask": "0x0" 00:07:48.370 }, 00:07:48.370 "nvmf_tcp": { 00:07:48.370 "mask": "0x20", 00:07:48.370 "tpoint_mask": "0x0" 00:07:48.370 }, 00:07:48.370 "ftl": { 00:07:48.370 "mask": "0x40", 00:07:48.370 "tpoint_mask": "0x0" 00:07:48.370 }, 00:07:48.370 "blobfs": { 00:07:48.370 "mask": "0x80", 00:07:48.370 "tpoint_mask": "0x0" 00:07:48.370 }, 00:07:48.370 "dsa": { 00:07:48.370 "mask": "0x200", 00:07:48.370 "tpoint_mask": "0x0" 00:07:48.370 }, 00:07:48.370 "thread": { 00:07:48.370 "mask": "0x400", 00:07:48.370 "tpoint_mask": "0x0" 00:07:48.370 }, 00:07:48.370 "nvme_pcie": { 00:07:48.370 "mask": "0x800", 00:07:48.370 "tpoint_mask": "0x0" 00:07:48.370 }, 00:07:48.370 "iaa": { 00:07:48.370 "mask": "0x1000", 00:07:48.370 "tpoint_mask": "0x0" 00:07:48.370 }, 00:07:48.370 "nvme_tcp": { 00:07:48.370 "mask": "0x2000", 00:07:48.370 "tpoint_mask": "0x0" 00:07:48.370 }, 00:07:48.370 "bdev_nvme": { 00:07:48.370 "mask": "0x4000", 00:07:48.370 "tpoint_mask": "0x0" 00:07:48.370 }, 00:07:48.370 "sock": { 00:07:48.370 "mask": "0x8000", 00:07:48.370 "tpoint_mask": "0x0" 00:07:48.370 }, 00:07:48.370 "blob": { 00:07:48.370 "mask": "0x10000", 00:07:48.370 "tpoint_mask": "0x0" 00:07:48.370 }, 00:07:48.370 "bdev_raid": { 00:07:48.370 "mask": "0x20000", 00:07:48.370 "tpoint_mask": "0x0" 00:07:48.370 }, 00:07:48.370 "scheduler": { 00:07:48.370 "mask": "0x40000", 00:07:48.370 "tpoint_mask": "0x0" 00:07:48.370 } 00:07:48.370 }' 00:07:48.370 19:26:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:48.370 19:26:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:48.370 19:26:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:48.370 19:26:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:48.370 19:26:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:48.370 19:26:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:48.370 19:26:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:48.370 19:26:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:48.370 19:26:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:48.370 19:26:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:48.370 00:07:48.370 real 0m0.167s 00:07:48.370 
user 0m0.139s 00:07:48.370 sys 0m0.017s 00:07:48.370 19:26:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.370 ************************************ 00:07:48.370 END TEST rpc_trace_cmd_test 00:07:48.370 ************************************ 00:07:48.370 19:26:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.370 19:26:07 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:48.370 19:26:07 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:48.370 19:26:07 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:48.370 19:26:07 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.370 19:26:07 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.370 19:26:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:48.370 ************************************ 00:07:48.370 START TEST rpc_daemon_integrity 00:07:48.370 ************************************ 00:07:48.370 19:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:07:48.370 19:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:48.370 19:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.370 19:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.370 19:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.370 19:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:48.370 19:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:48.371 19:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:48.371 19:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:48.371 19:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.371 19:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.630 19:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.630 19:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:48.630 19:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:48.630 19:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.630 19:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.630 19:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.630 19:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:48.630 { 00:07:48.630 "name": "Malloc2", 00:07:48.630 "aliases": [ 00:07:48.630 "4c22ef08-7389-4fd0-8899-587c3597c39e" 00:07:48.630 ], 00:07:48.630 "product_name": "Malloc disk", 00:07:48.630 "block_size": 512, 00:07:48.630 "num_blocks": 16384, 00:07:48.630 "uuid": "4c22ef08-7389-4fd0-8899-587c3597c39e", 00:07:48.630 "assigned_rate_limits": { 00:07:48.630 "rw_ios_per_sec": 0, 00:07:48.630 "rw_mbytes_per_sec": 0, 00:07:48.630 "r_mbytes_per_sec": 0, 00:07:48.630 "w_mbytes_per_sec": 0 00:07:48.630 }, 00:07:48.630 "claimed": false, 00:07:48.630 "zoned": false, 00:07:48.630 "supported_io_types": { 00:07:48.630 "read": true, 00:07:48.630 "write": true, 00:07:48.630 "unmap": true, 00:07:48.630 "flush": true, 00:07:48.630 "reset": true, 00:07:48.630 "nvme_admin": false, 00:07:48.630 "nvme_io": false, 00:07:48.630 "nvme_io_md": false, 00:07:48.630 "write_zeroes": true, 00:07:48.630 "zcopy": true, 00:07:48.630 "get_zone_info": 
false, 00:07:48.630 "zone_management": false, 00:07:48.630 "zone_append": false, 00:07:48.630 "compare": false, 00:07:48.630 "compare_and_write": false, 00:07:48.630 "abort": true, 00:07:48.630 "seek_hole": false, 00:07:48.630 "seek_data": false, 00:07:48.630 "copy": true, 00:07:48.630 "nvme_iov_md": false 00:07:48.630 }, 00:07:48.630 "memory_domains": [ 00:07:48.630 { 00:07:48.630 "dma_device_id": "system", 00:07:48.630 "dma_device_type": 1 00:07:48.630 }, 00:07:48.631 { 00:07:48.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.631 "dma_device_type": 2 00:07:48.631 } 00:07:48.631 ], 00:07:48.631 "driver_specific": {} 00:07:48.631 } 00:07:48.631 ]' 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.631 [2024-12-05 19:26:07.440068] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:48.631 [2024-12-05 19:26:07.440163] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.631 [2024-12-05 19:26:07.440189] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:07:48.631 [2024-12-05 19:26:07.440203] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.631 [2024-12-05 19:26:07.442778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.631 [2024-12-05 19:26:07.442839] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:48.631 Passthru0 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:48.631 { 00:07:48.631 "name": "Malloc2", 00:07:48.631 "aliases": [ 00:07:48.631 "4c22ef08-7389-4fd0-8899-587c3597c39e" 00:07:48.631 ], 00:07:48.631 "product_name": "Malloc disk", 00:07:48.631 "block_size": 512, 00:07:48.631 "num_blocks": 16384, 00:07:48.631 "uuid": "4c22ef08-7389-4fd0-8899-587c3597c39e", 00:07:48.631 "assigned_rate_limits": { 00:07:48.631 "rw_ios_per_sec": 0, 00:07:48.631 "rw_mbytes_per_sec": 0, 00:07:48.631 "r_mbytes_per_sec": 0, 00:07:48.631 "w_mbytes_per_sec": 0 00:07:48.631 }, 00:07:48.631 "claimed": true, 00:07:48.631 "claim_type": "exclusive_write", 00:07:48.631 "zoned": false, 00:07:48.631 "supported_io_types": { 00:07:48.631 "read": true, 00:07:48.631 "write": true, 00:07:48.631 "unmap": true, 00:07:48.631 "flush": true, 00:07:48.631 "reset": true, 00:07:48.631 "nvme_admin": false, 00:07:48.631 "nvme_io": false, 00:07:48.631 "nvme_io_md": false, 00:07:48.631 "write_zeroes": true, 00:07:48.631 "zcopy": true, 00:07:48.631 "get_zone_info": false, 00:07:48.631 "zone_management": false, 00:07:48.631 "zone_append": false, 00:07:48.631 "compare": false, 
00:07:48.631 "compare_and_write": false, 00:07:48.631 "abort": true, 00:07:48.631 "seek_hole": false, 00:07:48.631 "seek_data": false, 00:07:48.631 "copy": true, 00:07:48.631 "nvme_iov_md": false 00:07:48.631 }, 00:07:48.631 "memory_domains": [ 00:07:48.631 { 00:07:48.631 "dma_device_id": "system", 00:07:48.631 "dma_device_type": 1 00:07:48.631 }, 00:07:48.631 { 00:07:48.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.631 "dma_device_type": 2 00:07:48.631 } 00:07:48.631 ], 00:07:48.631 "driver_specific": {} 00:07:48.631 }, 00:07:48.631 { 00:07:48.631 "name": "Passthru0", 00:07:48.631 "aliases": [ 00:07:48.631 "2e734fcf-e005-52cc-b9b2-cdfe16372d65" 00:07:48.631 ], 00:07:48.631 "product_name": "passthru", 00:07:48.631 "block_size": 512, 00:07:48.631 "num_blocks": 16384, 00:07:48.631 "uuid": "2e734fcf-e005-52cc-b9b2-cdfe16372d65", 00:07:48.631 "assigned_rate_limits": { 00:07:48.631 "rw_ios_per_sec": 0, 00:07:48.631 "rw_mbytes_per_sec": 0, 00:07:48.631 "r_mbytes_per_sec": 0, 00:07:48.631 "w_mbytes_per_sec": 0 00:07:48.631 }, 00:07:48.631 "claimed": false, 00:07:48.631 "zoned": false, 00:07:48.631 "supported_io_types": { 00:07:48.631 "read": true, 00:07:48.631 "write": true, 00:07:48.631 "unmap": true, 00:07:48.631 "flush": true, 00:07:48.631 "reset": true, 00:07:48.631 "nvme_admin": false, 00:07:48.631 "nvme_io": false, 00:07:48.631 "nvme_io_md": false, 00:07:48.631 "write_zeroes": true, 00:07:48.631 "zcopy": true, 00:07:48.631 "get_zone_info": false, 00:07:48.631 "zone_management": false, 00:07:48.631 "zone_append": false, 00:07:48.631 "compare": false, 00:07:48.631 "compare_and_write": false, 00:07:48.631 "abort": true, 00:07:48.631 "seek_hole": false, 00:07:48.631 "seek_data": false, 00:07:48.631 "copy": true, 00:07:48.631 "nvme_iov_md": false 00:07:48.631 }, 00:07:48.631 "memory_domains": [ 00:07:48.631 { 00:07:48.631 "dma_device_id": "system", 00:07:48.631 "dma_device_type": 1 00:07:48.631 }, 00:07:48.631 { 00:07:48.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.631 "dma_device_type": 2 00:07:48.631 } 00:07:48.631 ], 00:07:48.631 "driver_specific": { 00:07:48.631 "passthru": { 00:07:48.631 "name": "Passthru0", 00:07:48.631 "base_bdev_name": "Malloc2" 00:07:48.631 } 00:07:48.631 } 00:07:48.631 } 00:07:48.631 ]' 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:48.631 19:26:07 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:48.631 00:07:48.632 real 0m0.253s 00:07:48.632 user 0m0.131s 00:07:48.632 sys 0m0.028s 00:07:48.632 19:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.632 19:26:07 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:48.632 ************************************ 00:07:48.632 END TEST rpc_daemon_integrity 00:07:48.632 ************************************ 00:07:48.632 19:26:07 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:48.632 19:26:07 rpc -- rpc/rpc.sh@84 -- # killprocess 57211 00:07:48.632 19:26:07 rpc -- common/autotest_common.sh@954 -- # '[' -z 57211 ']' 00:07:48.632 19:26:07 rpc -- common/autotest_common.sh@958 -- # kill -0 57211 00:07:48.632 19:26:07 rpc -- common/autotest_common.sh@959 -- # uname 00:07:48.632 19:26:07 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.632 19:26:07 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57211 00:07:48.893 19:26:07 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.893 killing process with pid 57211 00:07:48.893 19:26:07 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.893 19:26:07 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57211' 00:07:48.893 19:26:07 rpc -- common/autotest_common.sh@973 -- # kill 57211 00:07:48.893 19:26:07 rpc -- common/autotest_common.sh@978 -- # wait 57211 00:07:50.810 00:07:50.810 real 0m4.161s 00:07:50.810 user 0m4.427s 00:07:50.810 sys 0m0.777s 00:07:50.810 19:26:09 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.810 ************************************ 00:07:50.810 END TEST rpc 00:07:50.810 ************************************ 00:07:50.810 19:26:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.810 19:26:09 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:50.810 19:26:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.810 19:26:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.810 19:26:09 -- common/autotest_common.sh@10 -- # set +x 00:07:50.810 ************************************ 00:07:50.810 START TEST skip_rpc 00:07:50.810 ************************************ 00:07:50.810 19:26:09 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:07:50.810 * Looking for test storage... 
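The rpc_daemon_integrity run that just finished follows SPDK's standard RPC round-trip pattern: stack a passthru vbdev on a malloc base, confirm bdev_get_bdevs reports both, then tear down in reverse order and confirm the list is empty. A minimal bash sketch of that flow, assuming rpc_cmd is the usual autotest wrapper around scripts/rpc.py and that the malloc sizing (8 MiB in 512-byte blocks, matching the num_blocks/block_size in the dump above) is only illustrative:

    # Sketch only -- mirrors the rpc.sh trace above; rpc_cmd is assumed to proxy scripts/rpc.py.
    rpc_cmd bdev_malloc_create -b Malloc2 8 512            # 16384 x 512 B blocks, as in the dump
    rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0   # stack the passthru vbdev on the base
    bdevs=$(rpc_cmd bdev_get_bdevs)
    [ "$(jq length <<< "$bdevs")" == 2 ]                   # base + passthru both visible
    rpc_cmd bdev_passthru_delete Passthru0                 # tear down in reverse order
    rpc_cmd bdev_malloc_delete Malloc2
    [ "$(rpc_cmd bdev_get_bdevs | jq length)" == 0 ]       # bdev list must be empty again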
00:07:50.810 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:07:50.810 19:26:09 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:50.810 19:26:09 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:07:50.810 19:26:09 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:50.810 19:26:09 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.810 19:26:09 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:50.810 19:26:09 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.810 19:26:09 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:50.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.810 --rc genhtml_branch_coverage=1 00:07:50.810 --rc genhtml_function_coverage=1 00:07:50.810 --rc genhtml_legend=1 00:07:50.810 --rc geninfo_all_blocks=1 00:07:50.810 --rc geninfo_unexecuted_blocks=1 00:07:50.810 00:07:50.810 ' 00:07:50.810 19:26:09 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:50.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.810 --rc genhtml_branch_coverage=1 00:07:50.810 --rc genhtml_function_coverage=1 00:07:50.810 --rc genhtml_legend=1 00:07:50.810 --rc geninfo_all_blocks=1 00:07:50.810 --rc geninfo_unexecuted_blocks=1 00:07:50.810 00:07:50.810 ' 00:07:50.810 19:26:09 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:50.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.810 --rc genhtml_branch_coverage=1 00:07:50.810 --rc genhtml_function_coverage=1 00:07:50.810 --rc genhtml_legend=1 00:07:50.810 --rc geninfo_all_blocks=1 00:07:50.810 --rc geninfo_unexecuted_blocks=1 00:07:50.810 00:07:50.810 ' 00:07:50.810 19:26:09 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:50.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.810 --rc genhtml_branch_coverage=1 00:07:50.810 --rc genhtml_function_coverage=1 00:07:50.810 --rc genhtml_legend=1 00:07:50.810 --rc geninfo_all_blocks=1 00:07:50.810 --rc geninfo_unexecuted_blocks=1 00:07:50.810 00:07:50.810 ' 00:07:50.810 19:26:09 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:50.810 19:26:09 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:07:50.810 19:26:09 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:50.810 19:26:09 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:50.810 19:26:09 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.810 19:26:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.810 ************************************ 00:07:50.810 START TEST skip_rpc 00:07:50.810 ************************************ 00:07:50.810 19:26:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:07:50.810 19:26:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57429 00:07:50.810 19:26:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:50.810 19:26:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:50.810 19:26:09 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:50.810 [2024-12-05 19:26:09.761779] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
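The spdk_tgt instance above was started with --no-rpc-server, so the whole skip_rpc assertion is that any RPC against it must fail. A sketch of that check, with NOT and killprocess assumed to be the autotest_common.sh helpers visible in the trace:

    # Sketch of test_skip_rpc as traced here: no RPC server, so rpc_cmd has to fail.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5                        # no RPC socket exists, so the test sleeps instead of waitforlisten
    NOT rpc_cmd spdk_get_version   # NOT inverts the exit status: an RPC success here fails the test
    killprocess "$spdk_pid"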
00:07:50.810 [2024-12-05 19:26:09.761958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57429 ] 00:07:51.070 [2024-12-05 19:26:09.932246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.331 [2024-12-05 19:26:10.082752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57429 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57429 ']' 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57429 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57429 00:07:56.620 killing process with pid 57429 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57429' 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57429 00:07:56.620 19:26:14 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57429 00:07:57.566 00:07:57.566 real 0m6.698s 00:07:57.566 user 0m6.199s 00:07:57.566 sys 0m0.374s 00:07:57.566 19:26:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.566 19:26:16 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.566 ************************************ 00:07:57.566 END TEST skip_rpc 00:07:57.566 
************************************ 00:07:57.566 19:26:16 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:57.566 19:26:16 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.566 19:26:16 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.566 19:26:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.566 ************************************ 00:07:57.566 START TEST skip_rpc_with_json 00:07:57.566 ************************************ 00:07:57.566 19:26:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:07:57.566 19:26:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:57.566 19:26:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57528 00:07:57.566 19:26:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:57.566 19:26:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57528 00:07:57.566 19:26:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57528 ']' 00:07:57.566 19:26:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.566 19:26:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.566 19:26:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.566 19:26:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.566 19:26:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:57.566 19:26:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:57.566 [2024-12-05 19:26:16.528904] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
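skip_rpc_with_json drives a configuration round-trip: provoke a "transport does not exist" error, create the TCP transport, dump the live configuration with save_config, then restart the target from that JSON and grep its log for the transport-init notice. A compressed sketch of the cycle traced below; CONFIG_PATH, LOG_PATH, and the output redirections are assumptions based on the paths in the trace:

    # Sketch of the save/reload cycle; the JSON dump that follows is the contents of $CONFIG_PATH.
    rpc_cmd nvmf_get_transports --trtype tcp || true   # expected error: transport 'tcp' does not exist yet
    rpc_cmd nvmf_create_transport -t tcp               # target logs "*** TCP Transport Init ***"
    rpc_cmd save_config > "$CONFIG_PATH"
    killprocess "$spdk_pid"
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json "$CONFIG_PATH" &> "$LOG_PATH" &
    spdk_pid=$!
    sleep 5
    killprocess "$spdk_pid"
    grep -q 'TCP Transport Init' "$LOG_PATH"           # proves the JSON reload recreated the transport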
00:07:57.566 [2024-12-05 19:26:16.529061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57528 ] 00:07:57.829 [2024-12-05 19:26:16.688123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.829 [2024-12-05 19:26:16.828002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.832 19:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.832 19:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:07:58.832 19:26:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:58.832 19:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.832 19:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:58.832 [2024-12-05 19:26:17.586721] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:58.832 request: 00:07:58.832 { 00:07:58.832 "trtype": "tcp", 00:07:58.832 "method": "nvmf_get_transports", 00:07:58.832 "req_id": 1 00:07:58.832 } 00:07:58.832 Got JSON-RPC error response 00:07:58.832 response: 00:07:58.832 { 00:07:58.832 "code": -19, 00:07:58.832 "message": "No such device" 00:07:58.832 } 00:07:58.832 19:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:58.832 19:26:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:58.832 19:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.832 19:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:58.832 [2024-12-05 19:26:17.594812] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:58.832 19:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.832 19:26:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:58.832 19:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.832 19:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:58.832 19:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.832 19:26:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:07:58.832 { 00:07:58.832 "subsystems": [ 00:07:58.832 { 00:07:58.832 "subsystem": "fsdev", 00:07:58.832 "config": [ 00:07:58.832 { 00:07:58.832 "method": "fsdev_set_opts", 00:07:58.832 "params": { 00:07:58.832 "fsdev_io_pool_size": 65535, 00:07:58.832 "fsdev_io_cache_size": 256 00:07:58.832 } 00:07:58.832 } 00:07:58.832 ] 00:07:58.832 }, 00:07:58.832 { 00:07:58.832 "subsystem": "keyring", 00:07:58.832 "config": [] 00:07:58.832 }, 00:07:58.832 { 00:07:58.832 "subsystem": "iobuf", 00:07:58.832 "config": [ 00:07:58.832 { 00:07:58.832 "method": "iobuf_set_options", 00:07:58.832 "params": { 00:07:58.832 "small_pool_count": 8192, 00:07:58.832 "large_pool_count": 1024, 00:07:58.832 "small_bufsize": 8192, 00:07:58.832 "large_bufsize": 135168, 00:07:58.832 "enable_numa": false 00:07:58.832 } 00:07:58.832 } 00:07:58.832 ] 00:07:58.832 }, 00:07:58.832 { 00:07:58.832 "subsystem": "sock", 00:07:58.832 "config": [ 00:07:58.832 { 
00:07:58.832 "method": "sock_set_default_impl", 00:07:58.832 "params": { 00:07:58.832 "impl_name": "posix" 00:07:58.832 } 00:07:58.832 }, 00:07:58.832 { 00:07:58.832 "method": "sock_impl_set_options", 00:07:58.832 "params": { 00:07:58.832 "impl_name": "ssl", 00:07:58.832 "recv_buf_size": 4096, 00:07:58.832 "send_buf_size": 4096, 00:07:58.832 "enable_recv_pipe": true, 00:07:58.832 "enable_quickack": false, 00:07:58.832 "enable_placement_id": 0, 00:07:58.832 "enable_zerocopy_send_server": true, 00:07:58.832 "enable_zerocopy_send_client": false, 00:07:58.832 "zerocopy_threshold": 0, 00:07:58.832 "tls_version": 0, 00:07:58.832 "enable_ktls": false 00:07:58.832 } 00:07:58.832 }, 00:07:58.832 { 00:07:58.832 "method": "sock_impl_set_options", 00:07:58.832 "params": { 00:07:58.832 "impl_name": "posix", 00:07:58.832 "recv_buf_size": 2097152, 00:07:58.832 "send_buf_size": 2097152, 00:07:58.832 "enable_recv_pipe": true, 00:07:58.832 "enable_quickack": false, 00:07:58.832 "enable_placement_id": 0, 00:07:58.832 "enable_zerocopy_send_server": true, 00:07:58.832 "enable_zerocopy_send_client": false, 00:07:58.832 "zerocopy_threshold": 0, 00:07:58.832 "tls_version": 0, 00:07:58.832 "enable_ktls": false 00:07:58.832 } 00:07:58.832 } 00:07:58.832 ] 00:07:58.832 }, 00:07:58.832 { 00:07:58.832 "subsystem": "vmd", 00:07:58.832 "config": [] 00:07:58.832 }, 00:07:58.832 { 00:07:58.832 "subsystem": "accel", 00:07:58.832 "config": [ 00:07:58.832 { 00:07:58.832 "method": "accel_set_options", 00:07:58.832 "params": { 00:07:58.832 "small_cache_size": 128, 00:07:58.832 "large_cache_size": 16, 00:07:58.832 "task_count": 2048, 00:07:58.832 "sequence_count": 2048, 00:07:58.832 "buf_count": 2048 00:07:58.832 } 00:07:58.832 } 00:07:58.832 ] 00:07:58.832 }, 00:07:58.832 { 00:07:58.832 "subsystem": "bdev", 00:07:58.832 "config": [ 00:07:58.832 { 00:07:58.832 "method": "bdev_set_options", 00:07:58.832 "params": { 00:07:58.832 "bdev_io_pool_size": 65535, 00:07:58.832 "bdev_io_cache_size": 256, 00:07:58.832 "bdev_auto_examine": true, 00:07:58.832 "iobuf_small_cache_size": 128, 00:07:58.832 "iobuf_large_cache_size": 16 00:07:58.832 } 00:07:58.832 }, 00:07:58.832 { 00:07:58.832 "method": "bdev_raid_set_options", 00:07:58.832 "params": { 00:07:58.832 "process_window_size_kb": 1024, 00:07:58.832 "process_max_bandwidth_mb_sec": 0 00:07:58.832 } 00:07:58.832 }, 00:07:58.832 { 00:07:58.832 "method": "bdev_iscsi_set_options", 00:07:58.832 "params": { 00:07:58.832 "timeout_sec": 30 00:07:58.832 } 00:07:58.832 }, 00:07:58.832 { 00:07:58.832 "method": "bdev_nvme_set_options", 00:07:58.832 "params": { 00:07:58.832 "action_on_timeout": "none", 00:07:58.832 "timeout_us": 0, 00:07:58.832 "timeout_admin_us": 0, 00:07:58.832 "keep_alive_timeout_ms": 10000, 00:07:58.832 "arbitration_burst": 0, 00:07:58.832 "low_priority_weight": 0, 00:07:58.832 "medium_priority_weight": 0, 00:07:58.833 "high_priority_weight": 0, 00:07:58.833 "nvme_adminq_poll_period_us": 10000, 00:07:58.833 "nvme_ioq_poll_period_us": 0, 00:07:58.833 "io_queue_requests": 0, 00:07:58.833 "delay_cmd_submit": true, 00:07:58.833 "transport_retry_count": 4, 00:07:58.833 "bdev_retry_count": 3, 00:07:58.833 "transport_ack_timeout": 0, 00:07:58.833 "ctrlr_loss_timeout_sec": 0, 00:07:58.833 "reconnect_delay_sec": 0, 00:07:58.833 "fast_io_fail_timeout_sec": 0, 00:07:58.833 "disable_auto_failback": false, 00:07:58.833 "generate_uuids": false, 00:07:58.833 "transport_tos": 0, 00:07:58.833 "nvme_error_stat": false, 00:07:58.833 "rdma_srq_size": 0, 00:07:58.833 "io_path_stat": false, 
00:07:58.833 "allow_accel_sequence": false, 00:07:58.833 "rdma_max_cq_size": 0, 00:07:58.833 "rdma_cm_event_timeout_ms": 0, 00:07:58.833 "dhchap_digests": [ 00:07:58.833 "sha256", 00:07:58.833 "sha384", 00:07:58.833 "sha512" 00:07:58.833 ], 00:07:58.833 "dhchap_dhgroups": [ 00:07:58.833 "null", 00:07:58.833 "ffdhe2048", 00:07:58.833 "ffdhe3072", 00:07:58.833 "ffdhe4096", 00:07:58.833 "ffdhe6144", 00:07:58.833 "ffdhe8192" 00:07:58.833 ] 00:07:58.833 } 00:07:58.833 }, 00:07:58.833 { 00:07:58.833 "method": "bdev_nvme_set_hotplug", 00:07:58.833 "params": { 00:07:58.833 "period_us": 100000, 00:07:58.833 "enable": false 00:07:58.833 } 00:07:58.833 }, 00:07:58.833 { 00:07:58.833 "method": "bdev_wait_for_examine" 00:07:58.833 } 00:07:58.833 ] 00:07:58.833 }, 00:07:58.833 { 00:07:58.833 "subsystem": "scsi", 00:07:58.833 "config": null 00:07:58.833 }, 00:07:58.833 { 00:07:58.833 "subsystem": "scheduler", 00:07:58.833 "config": [ 00:07:58.833 { 00:07:58.833 "method": "framework_set_scheduler", 00:07:58.833 "params": { 00:07:58.833 "name": "static" 00:07:58.833 } 00:07:58.833 } 00:07:58.833 ] 00:07:58.833 }, 00:07:58.833 { 00:07:58.833 "subsystem": "vhost_scsi", 00:07:58.833 "config": [] 00:07:58.833 }, 00:07:58.833 { 00:07:58.833 "subsystem": "vhost_blk", 00:07:58.833 "config": [] 00:07:58.833 }, 00:07:58.833 { 00:07:58.833 "subsystem": "ublk", 00:07:58.833 "config": [] 00:07:58.833 }, 00:07:58.833 { 00:07:58.833 "subsystem": "nbd", 00:07:58.833 "config": [] 00:07:58.833 }, 00:07:58.833 { 00:07:58.833 "subsystem": "nvmf", 00:07:58.833 "config": [ 00:07:58.833 { 00:07:58.833 "method": "nvmf_set_config", 00:07:58.833 "params": { 00:07:58.833 "discovery_filter": "match_any", 00:07:58.833 "admin_cmd_passthru": { 00:07:58.833 "identify_ctrlr": false 00:07:58.833 }, 00:07:58.833 "dhchap_digests": [ 00:07:58.833 "sha256", 00:07:58.833 "sha384", 00:07:58.833 "sha512" 00:07:58.833 ], 00:07:58.833 "dhchap_dhgroups": [ 00:07:58.833 "null", 00:07:58.833 "ffdhe2048", 00:07:58.833 "ffdhe3072", 00:07:58.833 "ffdhe4096", 00:07:58.833 "ffdhe6144", 00:07:58.833 "ffdhe8192" 00:07:58.833 ] 00:07:58.833 } 00:07:58.833 }, 00:07:58.833 { 00:07:58.833 "method": "nvmf_set_max_subsystems", 00:07:58.833 "params": { 00:07:58.833 "max_subsystems": 1024 00:07:58.833 } 00:07:58.833 }, 00:07:58.833 { 00:07:58.833 "method": "nvmf_set_crdt", 00:07:58.833 "params": { 00:07:58.833 "crdt1": 0, 00:07:58.833 "crdt2": 0, 00:07:58.833 "crdt3": 0 00:07:58.833 } 00:07:58.833 }, 00:07:58.833 { 00:07:58.833 "method": "nvmf_create_transport", 00:07:58.833 "params": { 00:07:58.833 "trtype": "TCP", 00:07:58.833 "max_queue_depth": 128, 00:07:58.833 "max_io_qpairs_per_ctrlr": 127, 00:07:58.833 "in_capsule_data_size": 4096, 00:07:58.833 "max_io_size": 131072, 00:07:58.833 "io_unit_size": 131072, 00:07:58.833 "max_aq_depth": 128, 00:07:58.833 "num_shared_buffers": 511, 00:07:58.833 "buf_cache_size": 4294967295, 00:07:58.833 "dif_insert_or_strip": false, 00:07:58.833 "zcopy": false, 00:07:58.833 "c2h_success": true, 00:07:58.833 "sock_priority": 0, 00:07:58.833 "abort_timeout_sec": 1, 00:07:58.833 "ack_timeout": 0, 00:07:58.833 "data_wr_pool_size": 0 00:07:58.833 } 00:07:58.833 } 00:07:58.833 ] 00:07:58.833 }, 00:07:58.833 { 00:07:58.833 "subsystem": "iscsi", 00:07:58.833 "config": [ 00:07:58.833 { 00:07:58.833 "method": "iscsi_set_options", 00:07:58.833 "params": { 00:07:58.833 "node_base": "iqn.2016-06.io.spdk", 00:07:58.833 "max_sessions": 128, 00:07:58.833 "max_connections_per_session": 2, 00:07:58.833 "max_queue_depth": 64, 00:07:58.833 
"default_time2wait": 2, 00:07:58.833 "default_time2retain": 20, 00:07:58.833 "first_burst_length": 8192, 00:07:58.833 "immediate_data": true, 00:07:58.833 "allow_duplicated_isid": false, 00:07:58.833 "error_recovery_level": 0, 00:07:58.833 "nop_timeout": 60, 00:07:58.833 "nop_in_interval": 30, 00:07:58.833 "disable_chap": false, 00:07:58.833 "require_chap": false, 00:07:58.833 "mutual_chap": false, 00:07:58.833 "chap_group": 0, 00:07:58.833 "max_large_datain_per_connection": 64, 00:07:58.833 "max_r2t_per_connection": 4, 00:07:58.833 "pdu_pool_size": 36864, 00:07:58.833 "immediate_data_pool_size": 16384, 00:07:58.833 "data_out_pool_size": 2048 00:07:58.833 } 00:07:58.833 } 00:07:58.833 ] 00:07:58.833 } 00:07:58.833 ] 00:07:58.833 } 00:07:58.833 19:26:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:58.833 19:26:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57528 00:07:58.833 19:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57528 ']' 00:07:58.833 19:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57528 00:07:58.833 19:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:07:58.833 19:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.833 19:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57528 00:07:58.833 killing process with pid 57528 00:07:58.833 19:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:58.833 19:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:58.833 19:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57528' 00:07:58.833 19:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57528 00:07:58.833 19:26:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57528 00:08:00.746 19:26:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57573 00:08:00.746 19:26:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:00.746 19:26:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:06.038 19:26:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57573 00:08:06.038 19:26:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57573 ']' 00:08:06.038 19:26:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57573 00:08:06.038 19:26:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:08:06.038 19:26:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.039 19:26:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57573 00:08:06.039 19:26:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:06.039 19:26:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:06.039 19:26:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57573' 00:08:06.039 killing process with pid 57573 00:08:06.039 19:26:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 57573 00:08:06.039 19:26:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57573 00:08:07.419 19:26:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:07.419 19:26:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:08:07.419 00:08:07.419 real 0m9.762s 00:08:07.419 user 0m9.095s 00:08:07.419 sys 0m0.897s 00:08:07.419 19:26:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.419 ************************************ 00:08:07.419 END TEST skip_rpc_with_json 00:08:07.419 ************************************ 00:08:07.419 19:26:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:07.419 19:26:26 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:07.419 19:26:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:07.419 19:26:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.419 19:26:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.419 ************************************ 00:08:07.419 START TEST skip_rpc_with_delay 00:08:07.419 ************************************ 00:08:07.419 19:26:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:08:07.419 19:26:26 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:07.419 19:26:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:08:07.419 19:26:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:07.419 19:26:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:07.419 19:26:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:07.419 19:26:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:07.419 19:26:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:07.419 19:26:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:07.419 19:26:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:07.419 19:26:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:07.419 19:26:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:07.419 19:26:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:07.419 [2024-12-05 19:26:26.336051] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
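The *ERROR* line above is the whole point of skip_rpc_with_delay: --wait-for-rpc defers subsystem init until an RPC arrives, which is contradictory when --no-rpc-server is also given, so spdk_app_start must refuse to run. The assertion reduces to one negated invocation, with NOT again assumed from autotest_common.sh:

    # Sketch: this flag combination must exit nonzero, producing the app.c *ERROR* above.
    NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc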
00:08:07.419 19:26:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:08:07.419 ************************************ 00:08:07.419 END TEST skip_rpc_with_delay 00:08:07.419 ************************************ 00:08:07.419 19:26:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:07.419 19:26:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:07.419 19:26:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:07.419 00:08:07.419 real 0m0.128s 00:08:07.419 user 0m0.065s 00:08:07.419 sys 0m0.062s 00:08:07.419 19:26:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:07.419 19:26:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:07.680 19:26:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:07.680 19:26:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:07.680 19:26:26 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:07.680 19:26:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:07.680 19:26:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:07.680 19:26:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:07.680 ************************************ 00:08:07.680 START TEST exit_on_failed_rpc_init 00:08:07.680 ************************************ 00:08:07.680 19:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:08:07.680 19:26:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57702 00:08:07.680 19:26:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57702 00:08:07.680 19:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57702 ']' 00:08:07.680 19:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.680 19:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:07.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.680 19:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.680 19:26:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:08:07.680 19:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:07.680 19:26:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:07.680 [2024-12-05 19:26:26.537914] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
00:08:07.680 [2024-12-05 19:26:26.538485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57702 ] 00:08:07.941 [2024-12-05 19:26:26.696257] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.941 [2024-12-05 19:26:26.826365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.887 19:26:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:08.887 19:26:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:08:08.887 19:26:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:08.887 19:26:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:08.887 19:26:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:08:08.887 19:26:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:08.887 19:26:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:08.887 19:26:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.887 19:26:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:08.887 19:26:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.887 19:26:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:08.887 19:26:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.887 19:26:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:08.887 19:26:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:08:08.887 19:26:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:08:08.887 [2024-12-05 19:26:27.627482] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:08:08.887 [2024-12-05 19:26:27.627622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57720 ] 00:08:08.887 [2024-12-05 19:26:27.789469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.238 [2024-12-05 19:26:27.904982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.238 [2024-12-05 19:26:27.905079] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
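exit_on_failed_rpc_init verifies that a second target cannot claim the RPC socket the first one holds: the "in use. Specify another." error above comes from the -m 0x2 instance, which must then shut itself down with a nonzero status. A sketch under the same assumptions (waitforlisten, NOT, and killprocess from autotest_common.sh):

    # Sketch: two targets, one socket; the second must fail cleanly.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &    # first target owns /var/tmp/spdk.sock
    spdk_pid=$!
    waitforlisten "$spdk_pid"
    NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2  # same socket -> rpc.c *ERROR*, nonzero exit
    killprocess "$spdk_pid"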
00:08:09.238 [2024-12-05 19:26:27.905094] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:09.238 [2024-12-05 19:26:27.905110] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:09.238 19:26:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:08:09.238 19:26:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:09.238 19:26:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:08:09.238 19:26:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:08:09.238 19:26:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:08:09.238 19:26:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:09.238 19:26:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:09.238 19:26:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57702 00:08:09.238 19:26:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57702 ']' 00:08:09.238 19:26:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57702 00:08:09.238 19:26:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:08:09.238 19:26:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.238 19:26:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57702 00:08:09.238 killing process with pid 57702 00:08:09.238 19:26:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.238 19:26:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.238 19:26:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57702' 00:08:09.238 19:26:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57702 00:08:09.238 19:26:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57702 00:08:11.153 ************************************ 00:08:11.153 END TEST exit_on_failed_rpc_init 00:08:11.153 ************************************ 00:08:11.153 00:08:11.153 real 0m3.317s 00:08:11.153 user 0m3.561s 00:08:11.153 sys 0m0.545s 00:08:11.153 19:26:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.153 19:26:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:11.153 19:26:29 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:08:11.153 00:08:11.153 real 0m20.340s 00:08:11.153 user 0m19.067s 00:08:11.153 sys 0m2.087s 00:08:11.153 19:26:29 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.153 19:26:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:11.153 ************************************ 00:08:11.153 END TEST skip_rpc 00:08:11.153 ************************************ 00:08:11.153 19:26:29 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:11.153 19:26:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:11.153 19:26:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.153 19:26:29 -- common/autotest_common.sh@10 -- # set +x 00:08:11.153 
************************************ 00:08:11.153 START TEST rpc_client 00:08:11.153 ************************************ 00:08:11.153 19:26:29 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:08:11.153 * Looking for test storage... 00:08:11.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:08:11.153 19:26:29 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:11.153 19:26:29 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:08:11.153 19:26:29 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:11.153 19:26:30 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:11.153 19:26:30 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.153 19:26:30 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.153 19:26:30 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.153 19:26:30 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.153 19:26:30 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.153 19:26:30 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.153 19:26:30 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.153 19:26:30 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.153 19:26:30 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.153 19:26:30 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.153 19:26:30 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.153 19:26:30 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:08:11.154 19:26:30 rpc_client -- scripts/common.sh@345 -- # : 1 00:08:11.154 19:26:30 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.154 19:26:30 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.154 19:26:30 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:11.154 19:26:30 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:11.154 19:26:30 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.154 19:26:30 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:11.154 19:26:30 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.154 19:26:30 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:11.154 19:26:30 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:11.154 19:26:30 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.154 19:26:30 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:11.154 19:26:30 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.154 19:26:30 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.154 19:26:30 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.154 19:26:30 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:11.154 19:26:30 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.154 19:26:30 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:11.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.154 --rc genhtml_branch_coverage=1 00:08:11.154 --rc genhtml_function_coverage=1 00:08:11.154 --rc genhtml_legend=1 00:08:11.154 --rc geninfo_all_blocks=1 00:08:11.154 --rc geninfo_unexecuted_blocks=1 00:08:11.154 00:08:11.154 ' 00:08:11.154 19:26:30 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:11.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.154 --rc genhtml_branch_coverage=1 00:08:11.154 --rc genhtml_function_coverage=1 00:08:11.154 --rc genhtml_legend=1 00:08:11.154 --rc geninfo_all_blocks=1 00:08:11.154 --rc geninfo_unexecuted_blocks=1 00:08:11.154 00:08:11.154 ' 00:08:11.154 19:26:30 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:11.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.154 --rc genhtml_branch_coverage=1 00:08:11.154 --rc genhtml_function_coverage=1 00:08:11.154 --rc genhtml_legend=1 00:08:11.154 --rc geninfo_all_blocks=1 00:08:11.154 --rc geninfo_unexecuted_blocks=1 00:08:11.154 00:08:11.154 ' 00:08:11.154 19:26:30 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:11.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.154 --rc genhtml_branch_coverage=1 00:08:11.154 --rc genhtml_function_coverage=1 00:08:11.154 --rc genhtml_legend=1 00:08:11.154 --rc geninfo_all_blocks=1 00:08:11.154 --rc geninfo_unexecuted_blocks=1 00:08:11.154 00:08:11.154 ' 00:08:11.154 19:26:30 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:08:11.154 OK 00:08:11.154 19:26:30 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:11.154 00:08:11.154 real 0m0.231s 00:08:11.154 user 0m0.116s 00:08:11.154 sys 0m0.104s 00:08:11.154 19:26:30 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.154 19:26:30 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:11.154 ************************************ 00:08:11.154 END TEST rpc_client 00:08:11.154 ************************************ 00:08:11.154 19:26:30 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:11.154 19:26:30 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:11.154 19:26:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.154 19:26:30 -- common/autotest_common.sh@10 -- # set +x 00:08:11.414 ************************************ 00:08:11.414 START TEST json_config 00:08:11.414 ************************************ 00:08:11.414 19:26:30 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:08:11.414 19:26:30 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:11.414 19:26:30 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:08:11.414 19:26:30 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:11.414 19:26:30 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:11.414 19:26:30 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.414 19:26:30 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.414 19:26:30 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.414 19:26:30 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.414 19:26:30 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.414 19:26:30 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.414 19:26:30 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.414 19:26:30 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.414 19:26:30 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.414 19:26:30 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.414 19:26:30 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.414 19:26:30 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:11.414 19:26:30 json_config -- scripts/common.sh@345 -- # : 1 00:08:11.414 19:26:30 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.414 19:26:30 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:11.414 19:26:30 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:11.414 19:26:30 json_config -- scripts/common.sh@353 -- # local d=1 00:08:11.414 19:26:30 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.414 19:26:30 json_config -- scripts/common.sh@355 -- # echo 1 00:08:11.414 19:26:30 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.414 19:26:30 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:11.414 19:26:30 json_config -- scripts/common.sh@353 -- # local d=2 00:08:11.414 19:26:30 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.414 19:26:30 json_config -- scripts/common.sh@355 -- # echo 2 00:08:11.414 19:26:30 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.414 19:26:30 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.414 19:26:30 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.414 19:26:30 json_config -- scripts/common.sh@368 -- # return 0 00:08:11.414 19:26:30 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.414 19:26:30 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:11.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.414 --rc genhtml_branch_coverage=1 00:08:11.414 --rc genhtml_function_coverage=1 00:08:11.414 --rc genhtml_legend=1 00:08:11.414 --rc geninfo_all_blocks=1 00:08:11.414 --rc geninfo_unexecuted_blocks=1 00:08:11.414 00:08:11.414 ' 00:08:11.414 19:26:30 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:11.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.414 --rc genhtml_branch_coverage=1 00:08:11.414 --rc genhtml_function_coverage=1 00:08:11.414 --rc genhtml_legend=1 00:08:11.414 --rc geninfo_all_blocks=1 00:08:11.414 --rc geninfo_unexecuted_blocks=1 00:08:11.414 00:08:11.414 ' 00:08:11.414 19:26:30 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:11.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.414 --rc genhtml_branch_coverage=1 00:08:11.414 --rc genhtml_function_coverage=1 00:08:11.414 --rc genhtml_legend=1 00:08:11.414 --rc geninfo_all_blocks=1 00:08:11.414 --rc geninfo_unexecuted_blocks=1 00:08:11.414 00:08:11.414 ' 00:08:11.414 19:26:30 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:11.414 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.414 --rc genhtml_branch_coverage=1 00:08:11.414 --rc genhtml_function_coverage=1 00:08:11.414 --rc genhtml_legend=1 00:08:11.414 --rc geninfo_all_blocks=1 00:08:11.414 --rc geninfo_unexecuted_blocks=1 00:08:11.414 00:08:11.414 ' 00:08:11.415 19:26:30 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:11.415 19:26:30 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:11.415 19:26:30 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.415 19:26:30 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.415 19:26:30 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.415 19:26:30 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.415 19:26:30 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.415 19:26:30 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.415 19:26:30 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.415 19:26:30 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.415 19:26:30 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.415 19:26:30 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.415 19:26:30 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:effe6007-2875-4676-b590-7e2fb497993d 00:08:11.415 19:26:30 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=effe6007-2875-4676-b590-7e2fb497993d 00:08:11.415 19:26:30 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.415 19:26:30 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.415 19:26:30 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:11.415 19:26:30 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.415 19:26:30 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.415 19:26:30 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:11.415 19:26:30 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.415 19:26:30 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.415 19:26:30 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.415 19:26:30 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.415 19:26:30 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.415 19:26:30 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.415 19:26:30 json_config -- paths/export.sh@5 -- # export PATH 00:08:11.415 19:26:30 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.415 19:26:30 json_config -- nvmf/common.sh@51 -- # : 0 00:08:11.415 19:26:30 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:11.415 19:26:30 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:11.415 19:26:30 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.415 19:26:30 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.415 19:26:30 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.415 19:26:30 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:11.415 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:11.415 19:26:30 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:11.415 19:26:30 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:11.415 19:26:30 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:11.415 19:26:30 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:11.415 19:26:30 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:11.415 19:26:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:11.415 19:26:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:11.415 19:26:30 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:11.415 19:26:30 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:08:11.415 WARNING: No tests are enabled so not running JSON configuration tests 00:08:11.415 19:26:30 json_config -- json_config/json_config.sh@28 -- # exit 0 00:08:11.415 00:08:11.415 real 0m0.154s 00:08:11.415 user 0m0.090s 00:08:11.415 sys 0m0.066s 00:08:11.415 19:26:30 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.415 ************************************ 00:08:11.415 19:26:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:11.415 END TEST json_config 00:08:11.415 ************************************ 00:08:11.415 19:26:30 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:11.415 19:26:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:11.415 19:26:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.415 19:26:30 -- common/autotest_common.sh@10 -- # set +x 00:08:11.415 ************************************ 00:08:11.415 START TEST json_config_extra_key 00:08:11.415 ************************************ 00:08:11.415 19:26:30 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:08:11.675 19:26:30 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:11.675 19:26:30 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:08:11.675 19:26:30 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:11.675 19:26:30 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.675 19:26:30 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:11.675 19:26:30 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.675 19:26:30 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:11.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.675 --rc genhtml_branch_coverage=1 00:08:11.675 --rc genhtml_function_coverage=1 00:08:11.675 --rc genhtml_legend=1 00:08:11.675 --rc geninfo_all_blocks=1 00:08:11.675 --rc geninfo_unexecuted_blocks=1 00:08:11.675 00:08:11.675 ' 00:08:11.675 19:26:30 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:11.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.675 --rc genhtml_branch_coverage=1 00:08:11.675 --rc genhtml_function_coverage=1 00:08:11.675 --rc genhtml_legend=1 00:08:11.675 --rc geninfo_all_blocks=1 00:08:11.675 --rc geninfo_unexecuted_blocks=1 00:08:11.675 00:08:11.675 ' 00:08:11.675 19:26:30 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:11.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.675 --rc genhtml_branch_coverage=1 00:08:11.675 --rc genhtml_function_coverage=1 00:08:11.675 --rc genhtml_legend=1 00:08:11.675 --rc geninfo_all_blocks=1 00:08:11.675 --rc geninfo_unexecuted_blocks=1 00:08:11.675 00:08:11.675 ' 00:08:11.675 19:26:30 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:11.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.675 --rc genhtml_branch_coverage=1 00:08:11.675 --rc 
genhtml_function_coverage=1 00:08:11.675 --rc genhtml_legend=1 00:08:11.675 --rc geninfo_all_blocks=1 00:08:11.675 --rc geninfo_unexecuted_blocks=1 00:08:11.675 00:08:11.675 ' 00:08:11.675 19:26:30 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:11.675 19:26:30 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:11.675 19:26:30 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.675 19:26:30 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.675 19:26:30 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.675 19:26:30 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.675 19:26:30 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.675 19:26:30 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.675 19:26:30 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.675 19:26:30 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.675 19:26:30 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.675 19:26:30 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.675 19:26:30 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:effe6007-2875-4676-b590-7e2fb497993d 00:08:11.675 19:26:30 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=effe6007-2875-4676-b590-7e2fb497993d 00:08:11.675 19:26:30 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.675 19:26:30 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.675 19:26:30 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:11.675 19:26:30 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.675 19:26:30 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.675 19:26:30 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.675 19:26:30 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.675 19:26:30 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.676 19:26:30 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.676 19:26:30 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:11.676 19:26:30 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.676 19:26:30 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:11.676 19:26:30 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:11.676 19:26:30 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:11.676 19:26:30 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.676 19:26:30 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.676 19:26:30 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.676 19:26:30 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:11.676 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:11.676 19:26:30 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:11.676 19:26:30 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:11.676 19:26:30 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:11.676 19:26:30 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:08:11.676 19:26:30 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:11.676 19:26:30 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:11.676 19:26:30 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:11.676 19:26:30 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:11.676 19:26:30 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:11.676 19:26:30 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:11.676 19:26:30 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:08:11.676 19:26:30 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:11.676 INFO: launching applications... 00:08:11.676 19:26:30 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:11.676 19:26:30 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
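The json_config_extra_key setup traced above tracks each managed app through parallel associative arrays keyed by app name ('target'): app_pid, app_socket, app_params, and configs_path. A minimal bash sketch of that bookkeeping pattern follows; launch_app is an illustrative helper name, not the exact function from test/json_config/common.sh.

    # Parallel associative arrays keyed by app name, as declared in the trace.
    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

    # Illustrative helper (hypothetical name): start one app and record its pid.
    launch_app() {
        local app=$1
        # app_params is left unquoted on purpose so its flags word-split.
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
            -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
        app_pid[$app]=$!
    }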
00:08:11.676 19:26:30 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:11.676 19:26:30 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:11.676 19:26:30 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:11.676 19:26:30 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:11.676 19:26:30 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:11.676 19:26:30 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:11.676 19:26:30 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:11.676 19:26:30 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:11.676 19:26:30 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57919 00:08:11.676 Waiting for target to run... 00:08:11.676 19:26:30 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:11.676 19:26:30 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57919 /var/tmp/spdk_tgt.sock 00:08:11.676 19:26:30 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57919 ']' 00:08:11.676 19:26:30 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:11.676 19:26:30 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.676 19:26:30 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:11.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:11.676 19:26:30 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:08:11.676 19:26:30 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.676 19:26:30 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:11.676 [2024-12-05 19:26:30.639644] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:08:11.676 [2024-12-05 19:26:30.640016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57919 ] 00:08:12.245 [2024-12-05 19:26:31.086017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.245 [2024-12-05 19:26:31.196729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.817 00:08:12.817 INFO: shutting down applications... 00:08:12.817 19:26:31 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.817 19:26:31 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:08:12.817 19:26:31 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:12.817 19:26:31 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
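The start_app path above hands pid 57919 to waitforlisten, which blocks until the target answers on /var/tmp/spdk_tgt.sock. A hedged sketch of that polling idea, not the verbatim helper from common/autotest_common.sh:

    # Poll until $pid is listening on $sock; give up if the process dies first.
    wait_for_rpc_socket() {    # illustrative name
        local pid=$1 sock=$2 max_retries=100    # max_retries=100 as in the log
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1          # target exited early
            if [ -S "$sock" ] && \
               scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null; then
                return 0                                    # socket up and answering
            fi
            sleep 0.1
        done
        return 1
    }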
00:08:12.817 19:26:31 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:12.817 19:26:31 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:12.817 19:26:31 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:12.817 19:26:31 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57919 ]] 00:08:12.817 19:26:31 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57919 00:08:12.817 19:26:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:12.817 19:26:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:12.817 19:26:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57919 00:08:12.817 19:26:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:13.414 19:26:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:13.414 19:26:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:13.414 19:26:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57919 00:08:13.414 19:26:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:13.985 19:26:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:13.985 19:26:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:13.985 19:26:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57919 00:08:13.985 19:26:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:14.557 19:26:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:14.557 19:26:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:14.557 19:26:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57919 00:08:14.557 19:26:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:14.818 19:26:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:14.818 19:26:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:14.818 19:26:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57919 00:08:14.818 19:26:33 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:14.818 19:26:33 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:14.818 SPDK target shutdown done 00:08:14.818 Success 00:08:14.818 19:26:33 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:14.818 19:26:33 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:14.818 19:26:33 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:14.818 ************************************ 00:08:14.818 END TEST json_config_extra_key 00:08:14.818 ************************************ 00:08:14.818 00:08:14.818 real 0m3.389s 00:08:14.818 user 0m3.062s 00:08:14.818 sys 0m0.560s 00:08:14.818 19:26:33 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.818 19:26:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:15.080 19:26:33 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:15.080 19:26:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:15.080 19:26:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.080 19:26:33 -- common/autotest_common.sh@10 -- # set +x 00:08:15.080 
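The shutdown traced above sends SIGINT to pid 57919, then polls with kill -0 every 0.5 s for up to 30 iterations before reporting 'SPDK target shutdown done'. The same loop as a standalone sketch:

    # SIGINT, then wait up to ~15 s (30 x 0.5 s) for the target to exit.
    shutdown_app_sketch() {    # illustrative name
        local pid=$1
        kill -SIGINT "$pid"
        for (( i = 0; i < 30; i++ )); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        return 1    # still alive after the grace period
    }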
************************************ 00:08:15.080 START TEST alias_rpc 00:08:15.080 ************************************ 00:08:15.080 19:26:33 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:15.080 * Looking for test storage... 00:08:15.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:08:15.080 19:26:33 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:15.080 19:26:33 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:15.080 19:26:33 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:15.080 19:26:33 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:15.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:15.080 19:26:33 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:15.080 19:26:33 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:15.080 19:26:33 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:15.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.080 --rc genhtml_branch_coverage=1 00:08:15.080 --rc genhtml_function_coverage=1 00:08:15.080 --rc genhtml_legend=1 00:08:15.080 --rc geninfo_all_blocks=1 00:08:15.080 --rc geninfo_unexecuted_blocks=1 00:08:15.080 00:08:15.080 ' 00:08:15.080 19:26:33 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:15.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.080 --rc genhtml_branch_coverage=1 00:08:15.080 --rc genhtml_function_coverage=1 00:08:15.080 --rc genhtml_legend=1 00:08:15.080 --rc geninfo_all_blocks=1 00:08:15.080 --rc geninfo_unexecuted_blocks=1 00:08:15.080 00:08:15.080 ' 00:08:15.080 19:26:33 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:15.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.080 --rc genhtml_branch_coverage=1 00:08:15.080 --rc genhtml_function_coverage=1 00:08:15.080 --rc genhtml_legend=1 00:08:15.080 --rc geninfo_all_blocks=1 00:08:15.080 --rc geninfo_unexecuted_blocks=1 00:08:15.080 00:08:15.080 ' 00:08:15.080 19:26:33 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:15.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:15.080 --rc genhtml_branch_coverage=1 00:08:15.080 --rc genhtml_function_coverage=1 00:08:15.080 --rc genhtml_legend=1 00:08:15.080 --rc geninfo_all_blocks=1 00:08:15.080 --rc geninfo_unexecuted_blocks=1 00:08:15.080 00:08:15.080 ' 00:08:15.080 19:26:33 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:15.080 19:26:33 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58012 00:08:15.080 19:26:33 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58012 00:08:15.080 19:26:33 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58012 ']' 00:08:15.080 19:26:33 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.080 19:26:33 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.080 19:26:33 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:15.080 19:26:33 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.080 19:26:33 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.080 19:26:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:15.340 [2024-12-05 19:26:34.090781] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
00:08:15.340 [2024-12-05 19:26:34.091170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58012 ] 00:08:15.340 [2024-12-05 19:26:34.252180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.600 [2024-12-05 19:26:34.391910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.215 19:26:35 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.215 19:26:35 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:08:16.215 19:26:35 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:08:16.475 19:26:35 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58012 00:08:16.475 19:26:35 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58012 ']' 00:08:16.475 19:26:35 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58012 00:08:16.475 19:26:35 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:08:16.475 19:26:35 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:16.475 19:26:35 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58012 00:08:16.475 killing process with pid 58012 00:08:16.476 19:26:35 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:16.476 19:26:35 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:16.476 19:26:35 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58012' 00:08:16.476 19:26:35 alias_rpc -- common/autotest_common.sh@973 -- # kill 58012 00:08:16.476 19:26:35 alias_rpc -- common/autotest_common.sh@978 -- # wait 58012 00:08:18.391 ************************************ 00:08:18.391 END TEST alias_rpc 00:08:18.391 ************************************ 00:08:18.391 00:08:18.391 real 0m3.489s 00:08:18.391 user 0m3.462s 00:08:18.391 sys 0m0.583s 00:08:18.391 19:26:37 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:18.391 19:26:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:18.391 19:26:37 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:18.391 19:26:37 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:18.391 19:26:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:18.391 19:26:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.391 19:26:37 -- common/autotest_common.sh@10 -- # set +x 00:08:18.653 ************************************ 00:08:18.653 START TEST spdkcli_tcp 00:08:18.653 ************************************ 00:08:18.653 19:26:37 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:08:18.653 * Looking for test storage... 
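The alias_rpc teardown above ends with killprocess 58012: verify the pid is still alive, confirm via ps that it is the reactor rather than a sudo wrapper, then kill and wait to reap it. A minimal sketch of that flow, assuming Linux ps semantics:

    # Sketch of the killprocess pattern: refuse to signal sudo, then kill and reap.
    killprocess_sketch() {    # illustrative name
        local pid=$1
        kill -0 "$pid" || return 1                    # must still be running
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
            [ "$name" = sudo ] && return 1            # never kill the sudo parent
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                    # reap, propagating exit status
    }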
00:08:18.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:08:18.653 19:26:37 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:18.653 19:26:37 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:18.653 19:26:37 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:08:18.653 19:26:37 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.653 19:26:37 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:18.653 19:26:37 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.653 19:26:37 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:18.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.653 --rc genhtml_branch_coverage=1 00:08:18.653 --rc genhtml_function_coverage=1 00:08:18.653 --rc genhtml_legend=1 00:08:18.653 --rc geninfo_all_blocks=1 00:08:18.653 --rc geninfo_unexecuted_blocks=1 00:08:18.653 00:08:18.653 ' 00:08:18.653 19:26:37 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:18.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.653 --rc genhtml_branch_coverage=1 00:08:18.653 --rc genhtml_function_coverage=1 00:08:18.653 --rc genhtml_legend=1 00:08:18.653 --rc geninfo_all_blocks=1 00:08:18.653 --rc geninfo_unexecuted_blocks=1 00:08:18.653 
00:08:18.653 ' 00:08:18.653 19:26:37 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:18.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.653 --rc genhtml_branch_coverage=1 00:08:18.653 --rc genhtml_function_coverage=1 00:08:18.653 --rc genhtml_legend=1 00:08:18.653 --rc geninfo_all_blocks=1 00:08:18.653 --rc geninfo_unexecuted_blocks=1 00:08:18.653 00:08:18.653 ' 00:08:18.653 19:26:37 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:18.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.653 --rc genhtml_branch_coverage=1 00:08:18.653 --rc genhtml_function_coverage=1 00:08:18.653 --rc genhtml_legend=1 00:08:18.653 --rc geninfo_all_blocks=1 00:08:18.653 --rc geninfo_unexecuted_blocks=1 00:08:18.653 00:08:18.653 ' 00:08:18.653 19:26:37 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:08:18.653 19:26:37 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:08:18.653 19:26:37 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:08:18.653 19:26:37 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:18.653 19:26:37 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:18.653 19:26:37 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:18.653 19:26:37 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:18.653 19:26:37 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:18.653 19:26:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:18.653 19:26:37 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58119 00:08:18.654 19:26:37 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58119 00:08:18.654 19:26:37 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:18.654 19:26:37 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58119 ']' 00:08:18.654 19:26:37 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.654 19:26:37 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.654 19:26:37 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.654 19:26:37 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.654 19:26:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:18.914 [2024-12-05 19:26:37.699727] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
00:08:18.914 [2024-12-05 19:26:37.700914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58119 ] 00:08:18.914 [2024-12-05 19:26:37.887255] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:19.174 [2024-12-05 19:26:38.058265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.174 [2024-12-05 19:26:38.058277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.128 19:26:38 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.128 19:26:38 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:08:20.128 19:26:38 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58136 00:08:20.128 19:26:38 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:20.128 19:26:38 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:20.128 [ 00:08:20.128 "bdev_malloc_delete", 00:08:20.128 "bdev_malloc_create", 00:08:20.128 "bdev_null_resize", 00:08:20.128 "bdev_null_delete", 00:08:20.128 "bdev_null_create", 00:08:20.128 "bdev_nvme_cuse_unregister", 00:08:20.128 "bdev_nvme_cuse_register", 00:08:20.128 "bdev_opal_new_user", 00:08:20.128 "bdev_opal_set_lock_state", 00:08:20.128 "bdev_opal_delete", 00:08:20.128 "bdev_opal_get_info", 00:08:20.128 "bdev_opal_create", 00:08:20.128 "bdev_nvme_opal_revert", 00:08:20.128 "bdev_nvme_opal_init", 00:08:20.128 "bdev_nvme_send_cmd", 00:08:20.128 "bdev_nvme_set_keys", 00:08:20.128 "bdev_nvme_get_path_iostat", 00:08:20.128 "bdev_nvme_get_mdns_discovery_info", 00:08:20.128 "bdev_nvme_stop_mdns_discovery", 00:08:20.128 "bdev_nvme_start_mdns_discovery", 00:08:20.128 "bdev_nvme_set_multipath_policy", 00:08:20.128 "bdev_nvme_set_preferred_path", 00:08:20.128 "bdev_nvme_get_io_paths", 00:08:20.128 "bdev_nvme_remove_error_injection", 00:08:20.128 "bdev_nvme_add_error_injection", 00:08:20.128 "bdev_nvme_get_discovery_info", 00:08:20.128 "bdev_nvme_stop_discovery", 00:08:20.128 "bdev_nvme_start_discovery", 00:08:20.128 "bdev_nvme_get_controller_health_info", 00:08:20.128 "bdev_nvme_disable_controller", 00:08:20.128 "bdev_nvme_enable_controller", 00:08:20.128 "bdev_nvme_reset_controller", 00:08:20.128 "bdev_nvme_get_transport_statistics", 00:08:20.128 "bdev_nvme_apply_firmware", 00:08:20.128 "bdev_nvme_detach_controller", 00:08:20.128 "bdev_nvme_get_controllers", 00:08:20.128 "bdev_nvme_attach_controller", 00:08:20.128 "bdev_nvme_set_hotplug", 00:08:20.128 "bdev_nvme_set_options", 00:08:20.128 "bdev_passthru_delete", 00:08:20.128 "bdev_passthru_create", 00:08:20.128 "bdev_lvol_set_parent_bdev", 00:08:20.128 "bdev_lvol_set_parent", 00:08:20.128 "bdev_lvol_check_shallow_copy", 00:08:20.128 "bdev_lvol_start_shallow_copy", 00:08:20.128 "bdev_lvol_grow_lvstore", 00:08:20.128 "bdev_lvol_get_lvols", 00:08:20.128 "bdev_lvol_get_lvstores", 00:08:20.128 "bdev_lvol_delete", 00:08:20.128 "bdev_lvol_set_read_only", 00:08:20.128 "bdev_lvol_resize", 00:08:20.128 "bdev_lvol_decouple_parent", 00:08:20.128 "bdev_lvol_inflate", 00:08:20.128 "bdev_lvol_rename", 00:08:20.128 "bdev_lvol_clone_bdev", 00:08:20.128 "bdev_lvol_clone", 00:08:20.128 "bdev_lvol_snapshot", 00:08:20.128 "bdev_lvol_create", 00:08:20.128 "bdev_lvol_delete_lvstore", 00:08:20.128 "bdev_lvol_rename_lvstore", 00:08:20.128 
"bdev_lvol_create_lvstore", 00:08:20.128 "bdev_raid_set_options", 00:08:20.128 "bdev_raid_remove_base_bdev", 00:08:20.128 "bdev_raid_add_base_bdev", 00:08:20.128 "bdev_raid_delete", 00:08:20.128 "bdev_raid_create", 00:08:20.128 "bdev_raid_get_bdevs", 00:08:20.128 "bdev_error_inject_error", 00:08:20.128 "bdev_error_delete", 00:08:20.128 "bdev_error_create", 00:08:20.128 "bdev_split_delete", 00:08:20.128 "bdev_split_create", 00:08:20.128 "bdev_delay_delete", 00:08:20.128 "bdev_delay_create", 00:08:20.128 "bdev_delay_update_latency", 00:08:20.128 "bdev_zone_block_delete", 00:08:20.128 "bdev_zone_block_create", 00:08:20.128 "blobfs_create", 00:08:20.128 "blobfs_detect", 00:08:20.128 "blobfs_set_cache_size", 00:08:20.128 "bdev_xnvme_delete", 00:08:20.128 "bdev_xnvme_create", 00:08:20.128 "bdev_aio_delete", 00:08:20.128 "bdev_aio_rescan", 00:08:20.128 "bdev_aio_create", 00:08:20.128 "bdev_ftl_set_property", 00:08:20.128 "bdev_ftl_get_properties", 00:08:20.128 "bdev_ftl_get_stats", 00:08:20.128 "bdev_ftl_unmap", 00:08:20.128 "bdev_ftl_unload", 00:08:20.128 "bdev_ftl_delete", 00:08:20.128 "bdev_ftl_load", 00:08:20.128 "bdev_ftl_create", 00:08:20.128 "bdev_virtio_attach_controller", 00:08:20.128 "bdev_virtio_scsi_get_devices", 00:08:20.128 "bdev_virtio_detach_controller", 00:08:20.128 "bdev_virtio_blk_set_hotplug", 00:08:20.128 "bdev_iscsi_delete", 00:08:20.128 "bdev_iscsi_create", 00:08:20.128 "bdev_iscsi_set_options", 00:08:20.128 "accel_error_inject_error", 00:08:20.128 "ioat_scan_accel_module", 00:08:20.128 "dsa_scan_accel_module", 00:08:20.128 "iaa_scan_accel_module", 00:08:20.128 "keyring_file_remove_key", 00:08:20.128 "keyring_file_add_key", 00:08:20.128 "keyring_linux_set_options", 00:08:20.128 "fsdev_aio_delete", 00:08:20.128 "fsdev_aio_create", 00:08:20.128 "iscsi_get_histogram", 00:08:20.128 "iscsi_enable_histogram", 00:08:20.128 "iscsi_set_options", 00:08:20.128 "iscsi_get_auth_groups", 00:08:20.128 "iscsi_auth_group_remove_secret", 00:08:20.128 "iscsi_auth_group_add_secret", 00:08:20.128 "iscsi_delete_auth_group", 00:08:20.128 "iscsi_create_auth_group", 00:08:20.128 "iscsi_set_discovery_auth", 00:08:20.128 "iscsi_get_options", 00:08:20.128 "iscsi_target_node_request_logout", 00:08:20.128 "iscsi_target_node_set_redirect", 00:08:20.128 "iscsi_target_node_set_auth", 00:08:20.128 "iscsi_target_node_add_lun", 00:08:20.128 "iscsi_get_stats", 00:08:20.128 "iscsi_get_connections", 00:08:20.128 "iscsi_portal_group_set_auth", 00:08:20.128 "iscsi_start_portal_group", 00:08:20.128 "iscsi_delete_portal_group", 00:08:20.128 "iscsi_create_portal_group", 00:08:20.128 "iscsi_get_portal_groups", 00:08:20.128 "iscsi_delete_target_node", 00:08:20.128 "iscsi_target_node_remove_pg_ig_maps", 00:08:20.128 "iscsi_target_node_add_pg_ig_maps", 00:08:20.128 "iscsi_create_target_node", 00:08:20.128 "iscsi_get_target_nodes", 00:08:20.128 "iscsi_delete_initiator_group", 00:08:20.128 "iscsi_initiator_group_remove_initiators", 00:08:20.128 "iscsi_initiator_group_add_initiators", 00:08:20.128 "iscsi_create_initiator_group", 00:08:20.128 "iscsi_get_initiator_groups", 00:08:20.128 "nvmf_set_crdt", 00:08:20.128 "nvmf_set_config", 00:08:20.128 "nvmf_set_max_subsystems", 00:08:20.128 "nvmf_stop_mdns_prr", 00:08:20.128 "nvmf_publish_mdns_prr", 00:08:20.128 "nvmf_subsystem_get_listeners", 00:08:20.128 "nvmf_subsystem_get_qpairs", 00:08:20.128 "nvmf_subsystem_get_controllers", 00:08:20.128 "nvmf_get_stats", 00:08:20.128 "nvmf_get_transports", 00:08:20.128 "nvmf_create_transport", 00:08:20.128 "nvmf_get_targets", 00:08:20.128 
"nvmf_delete_target", 00:08:20.128 "nvmf_create_target", 00:08:20.128 "nvmf_subsystem_allow_any_host", 00:08:20.128 "nvmf_subsystem_set_keys", 00:08:20.128 "nvmf_subsystem_remove_host", 00:08:20.128 "nvmf_subsystem_add_host", 00:08:20.128 "nvmf_ns_remove_host", 00:08:20.128 "nvmf_ns_add_host", 00:08:20.128 "nvmf_subsystem_remove_ns", 00:08:20.128 "nvmf_subsystem_set_ns_ana_group", 00:08:20.128 "nvmf_subsystem_add_ns", 00:08:20.129 "nvmf_subsystem_listener_set_ana_state", 00:08:20.129 "nvmf_discovery_get_referrals", 00:08:20.129 "nvmf_discovery_remove_referral", 00:08:20.129 "nvmf_discovery_add_referral", 00:08:20.129 "nvmf_subsystem_remove_listener", 00:08:20.129 "nvmf_subsystem_add_listener", 00:08:20.129 "nvmf_delete_subsystem", 00:08:20.129 "nvmf_create_subsystem", 00:08:20.129 "nvmf_get_subsystems", 00:08:20.129 "env_dpdk_get_mem_stats", 00:08:20.129 "nbd_get_disks", 00:08:20.129 "nbd_stop_disk", 00:08:20.129 "nbd_start_disk", 00:08:20.129 "ublk_recover_disk", 00:08:20.129 "ublk_get_disks", 00:08:20.129 "ublk_stop_disk", 00:08:20.129 "ublk_start_disk", 00:08:20.129 "ublk_destroy_target", 00:08:20.129 "ublk_create_target", 00:08:20.129 "virtio_blk_create_transport", 00:08:20.129 "virtio_blk_get_transports", 00:08:20.129 "vhost_controller_set_coalescing", 00:08:20.129 "vhost_get_controllers", 00:08:20.129 "vhost_delete_controller", 00:08:20.129 "vhost_create_blk_controller", 00:08:20.129 "vhost_scsi_controller_remove_target", 00:08:20.129 "vhost_scsi_controller_add_target", 00:08:20.129 "vhost_start_scsi_controller", 00:08:20.129 "vhost_create_scsi_controller", 00:08:20.129 "thread_set_cpumask", 00:08:20.129 "scheduler_set_options", 00:08:20.129 "framework_get_governor", 00:08:20.129 "framework_get_scheduler", 00:08:20.129 "framework_set_scheduler", 00:08:20.129 "framework_get_reactors", 00:08:20.129 "thread_get_io_channels", 00:08:20.129 "thread_get_pollers", 00:08:20.129 "thread_get_stats", 00:08:20.129 "framework_monitor_context_switch", 00:08:20.129 "spdk_kill_instance", 00:08:20.129 "log_enable_timestamps", 00:08:20.129 "log_get_flags", 00:08:20.129 "log_clear_flag", 00:08:20.129 "log_set_flag", 00:08:20.129 "log_get_level", 00:08:20.129 "log_set_level", 00:08:20.129 "log_get_print_level", 00:08:20.129 "log_set_print_level", 00:08:20.129 "framework_enable_cpumask_locks", 00:08:20.129 "framework_disable_cpumask_locks", 00:08:20.129 "framework_wait_init", 00:08:20.129 "framework_start_init", 00:08:20.129 "scsi_get_devices", 00:08:20.129 "bdev_get_histogram", 00:08:20.129 "bdev_enable_histogram", 00:08:20.129 "bdev_set_qos_limit", 00:08:20.129 "bdev_set_qd_sampling_period", 00:08:20.129 "bdev_get_bdevs", 00:08:20.129 "bdev_reset_iostat", 00:08:20.129 "bdev_get_iostat", 00:08:20.129 "bdev_examine", 00:08:20.129 "bdev_wait_for_examine", 00:08:20.129 "bdev_set_options", 00:08:20.129 "accel_get_stats", 00:08:20.129 "accel_set_options", 00:08:20.129 "accel_set_driver", 00:08:20.129 "accel_crypto_key_destroy", 00:08:20.129 "accel_crypto_keys_get", 00:08:20.129 "accel_crypto_key_create", 00:08:20.129 "accel_assign_opc", 00:08:20.129 "accel_get_module_info", 00:08:20.129 "accel_get_opc_assignments", 00:08:20.129 "vmd_rescan", 00:08:20.129 "vmd_remove_device", 00:08:20.129 "vmd_enable", 00:08:20.129 "sock_get_default_impl", 00:08:20.129 "sock_set_default_impl", 00:08:20.129 "sock_impl_set_options", 00:08:20.129 "sock_impl_get_options", 00:08:20.129 "iobuf_get_stats", 00:08:20.129 "iobuf_set_options", 00:08:20.129 "keyring_get_keys", 00:08:20.129 "framework_get_pci_devices", 00:08:20.129 
"framework_get_config", 00:08:20.129 "framework_get_subsystems", 00:08:20.129 "fsdev_set_opts", 00:08:20.129 "fsdev_get_opts", 00:08:20.129 "trace_get_info", 00:08:20.129 "trace_get_tpoint_group_mask", 00:08:20.129 "trace_disable_tpoint_group", 00:08:20.129 "trace_enable_tpoint_group", 00:08:20.129 "trace_clear_tpoint_mask", 00:08:20.129 "trace_set_tpoint_mask", 00:08:20.129 "notify_get_notifications", 00:08:20.129 "notify_get_types", 00:08:20.129 "spdk_get_version", 00:08:20.129 "rpc_get_methods" 00:08:20.129 ] 00:08:20.129 19:26:39 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:20.129 19:26:39 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:20.129 19:26:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:20.411 19:26:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:20.411 19:26:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58119 00:08:20.411 19:26:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58119 ']' 00:08:20.411 19:26:39 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58119 00:08:20.411 19:26:39 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:08:20.411 19:26:39 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.411 19:26:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58119 00:08:20.411 killing process with pid 58119 00:08:20.411 19:26:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.411 19:26:39 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.411 19:26:39 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58119' 00:08:20.411 19:26:39 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58119 00:08:20.411 19:26:39 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58119 00:08:22.323 00:08:22.323 real 0m3.561s 00:08:22.323 user 0m6.119s 00:08:22.323 sys 0m0.660s 00:08:22.323 19:26:40 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.323 ************************************ 00:08:22.323 END TEST spdkcli_tcp 00:08:22.323 ************************************ 00:08:22.323 19:26:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:22.323 19:26:41 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:22.323 19:26:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:22.323 19:26:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.323 19:26:41 -- common/autotest_common.sh@10 -- # set +x 00:08:22.323 ************************************ 00:08:22.323 START TEST dpdk_mem_utility 00:08:22.323 ************************************ 00:08:22.323 19:26:41 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:22.323 * Looking for test storage... 
00:08:22.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:08:22.324 19:26:41 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:22.324 19:26:41 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:08:22.324 19:26:41 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:22.324 19:26:41 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.324 19:26:41 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:22.324 19:26:41 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.324 19:26:41 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:22.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.324 --rc genhtml_branch_coverage=1 00:08:22.324 --rc genhtml_function_coverage=1 00:08:22.324 --rc genhtml_legend=1 00:08:22.324 --rc geninfo_all_blocks=1 00:08:22.324 --rc geninfo_unexecuted_blocks=1 00:08:22.324 00:08:22.324 ' 00:08:22.324 19:26:41 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:22.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.324 --rc 
genhtml_branch_coverage=1 00:08:22.324 --rc genhtml_function_coverage=1 00:08:22.324 --rc genhtml_legend=1 00:08:22.324 --rc geninfo_all_blocks=1 00:08:22.324 --rc geninfo_unexecuted_blocks=1 00:08:22.324 00:08:22.324 ' 00:08:22.324 19:26:41 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:22.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.324 --rc genhtml_branch_coverage=1 00:08:22.324 --rc genhtml_function_coverage=1 00:08:22.324 --rc genhtml_legend=1 00:08:22.324 --rc geninfo_all_blocks=1 00:08:22.324 --rc geninfo_unexecuted_blocks=1 00:08:22.324 00:08:22.324 ' 00:08:22.324 19:26:41 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:22.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.324 --rc genhtml_branch_coverage=1 00:08:22.324 --rc genhtml_function_coverage=1 00:08:22.324 --rc genhtml_legend=1 00:08:22.324 --rc geninfo_all_blocks=1 00:08:22.324 --rc geninfo_unexecuted_blocks=1 00:08:22.324 00:08:22.324 ' 00:08:22.324 19:26:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:22.324 19:26:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58230 00:08:22.324 19:26:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58230 00:08:22.324 19:26:41 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58230 ']' 00:08:22.324 19:26:41 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.324 19:26:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:08:22.324 19:26:41 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.324 19:26:41 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.324 19:26:41 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.324 19:26:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:22.324 [2024-12-05 19:26:41.303695] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
00:08:22.324 [2024-12-05 19:26:41.303872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58230 ] 00:08:22.585 [2024-12-05 19:26:41.466883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.854 [2024-12-05 19:26:41.611485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.424 19:26:42 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:23.424 19:26:42 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:08:23.424 19:26:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:23.424 19:26:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:23.424 19:26:42 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.424 19:26:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:23.424 { 00:08:23.424 "filename": "/tmp/spdk_mem_dump.txt" 00:08:23.424 } 00:08:23.424 19:26:42 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.424 19:26:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:08:23.686 DPDK memory size 824.000000 MiB in 1 heap(s) 00:08:23.686 1 heaps totaling size 824.000000 MiB 00:08:23.686 size: 824.000000 MiB heap id: 0 00:08:23.686 end heaps---------- 00:08:23.686 9 mempools totaling size 603.782043 MiB 00:08:23.686 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:23.686 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:23.686 size: 100.555481 MiB name: bdev_io_58230 00:08:23.686 size: 50.003479 MiB name: msgpool_58230 00:08:23.686 size: 36.509338 MiB name: fsdev_io_58230 00:08:23.686 size: 21.763794 MiB name: PDU_Pool 00:08:23.686 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:23.686 size: 4.133484 MiB name: evtpool_58230 00:08:23.686 size: 0.026123 MiB name: Session_Pool 00:08:23.686 end mempools------- 00:08:23.686 6 memzones totaling size 4.142822 MiB 00:08:23.686 size: 1.000366 MiB name: RG_ring_0_58230 00:08:23.686 size: 1.000366 MiB name: RG_ring_1_58230 00:08:23.686 size: 1.000366 MiB name: RG_ring_4_58230 00:08:23.686 size: 1.000366 MiB name: RG_ring_5_58230 00:08:23.686 size: 0.125366 MiB name: RG_ring_2_58230 00:08:23.686 size: 0.015991 MiB name: RG_ring_3_58230 00:08:23.686 end memzones------- 00:08:23.686 19:26:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:08:23.686 heap id: 0 total size: 824.000000 MiB number of busy elements: 323 number of free elements: 18 00:08:23.686 list of free elements. 
size: 16.779419 MiB 00:08:23.686 element at address: 0x200006400000 with size: 1.995972 MiB 00:08:23.686 element at address: 0x20000a600000 with size: 1.995972 MiB 00:08:23.686 element at address: 0x200003e00000 with size: 1.991028 MiB 00:08:23.686 element at address: 0x200019500040 with size: 0.999939 MiB 00:08:23.686 element at address: 0x200019900040 with size: 0.999939 MiB 00:08:23.686 element at address: 0x200019a00000 with size: 0.999084 MiB 00:08:23.686 element at address: 0x200032600000 with size: 0.994324 MiB 00:08:23.686 element at address: 0x200000400000 with size: 0.992004 MiB 00:08:23.686 element at address: 0x200019200000 with size: 0.959656 MiB 00:08:23.686 element at address: 0x200019d00040 with size: 0.936401 MiB 00:08:23.686 element at address: 0x200000200000 with size: 0.716980 MiB 00:08:23.686 element at address: 0x20001b400000 with size: 0.560730 MiB 00:08:23.686 element at address: 0x200000c00000 with size: 0.489197 MiB 00:08:23.686 element at address: 0x200019600000 with size: 0.488220 MiB 00:08:23.686 element at address: 0x200019e00000 with size: 0.485413 MiB 00:08:23.686 element at address: 0x200012c00000 with size: 0.433228 MiB 00:08:23.686 element at address: 0x200028800000 with size: 0.390442 MiB 00:08:23.686 element at address: 0x200000800000 with size: 0.350891 MiB 00:08:23.686 list of standard malloc elements. size: 199.289673 MiB 00:08:23.686 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:08:23.686 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:08:23.686 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:08:23.686 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:08:23.686 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:08:23.686 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:08:23.686 element at address: 0x200019deff40 with size: 0.062683 MiB 00:08:23.686 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:08:23.686 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:08:23.686 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:08:23.686 element at address: 0x200012bff040 with size: 0.000305 MiB 00:08:23.686 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:08:23.686 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:08:23.686 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:08:23.686 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:08:23.686 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:08:23.687 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200000cff000 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200012bff180 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200012bff280 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200012bff380 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200012bff480 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200012bff580 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200012bff680 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200012bff780 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200012bff880 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200012bff980 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200012c6f780 
with size: 0.000244 MiB 00:08:23.687 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200019affc40 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4915c0 with size: 0.000244 MiB 
00:08:23.687 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:08:23.687 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:08:23.688 element at 
address: 0x20001b4947c0 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:08:23.688 element at address: 0x200028863f40 with size: 0.000244 MiB 00:08:23.688 element at address: 0x200028864040 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886af80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886b080 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886b180 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886b280 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886b380 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886b480 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886b580 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886b680 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886b780 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886b880 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886b980 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886be80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886c080 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886c180 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886c280 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886c380 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886c480 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886c580 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886c680 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886c780 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886c880 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886c980 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886d080 
with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886d180 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886d280 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886d380 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886d480 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886d580 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886d680 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886d780 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886d880 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886d980 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886da80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886db80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886de80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886df80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886e080 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886e180 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886e280 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886e380 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886e480 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886e580 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886e680 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886e780 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886e880 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886e980 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886f080 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886f180 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886f280 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886f380 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886f480 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886f580 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886f680 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886f780 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886f880 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886f980 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:08:23.688 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:08:23.688 list of memzone associated elements. 
size: 607.930908 MiB 00:08:23.688 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:08:23.688 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:23.688 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:08:23.688 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:23.688 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:08:23.688 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58230_0 00:08:23.688 element at address: 0x200000dff340 with size: 48.003113 MiB 00:08:23.688 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58230_0 00:08:23.688 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:08:23.688 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58230_0 00:08:23.688 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:08:23.688 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:23.688 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:08:23.688 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:23.688 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:08:23.688 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58230_0 00:08:23.688 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:08:23.688 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58230 00:08:23.688 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:08:23.688 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58230 00:08:23.688 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:08:23.688 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:23.688 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:08:23.688 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:23.688 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:08:23.688 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:23.688 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:08:23.688 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:23.688 element at address: 0x200000cff100 with size: 1.000549 MiB 00:08:23.688 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58230 00:08:23.688 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:08:23.688 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58230 00:08:23.688 element at address: 0x200019affd40 with size: 1.000549 MiB 00:08:23.688 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58230 00:08:23.688 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:08:23.688 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58230 00:08:23.688 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:08:23.689 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58230 00:08:23.689 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:08:23.689 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58230 00:08:23.689 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:08:23.689 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:23.689 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:08:23.689 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:23.689 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:08:23.689 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:08:23.689 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:08:23.689 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58230 00:08:23.689 element at address: 0x20000085df80 with size: 0.125549 MiB 00:08:23.689 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58230 00:08:23.689 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:08:23.689 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:23.689 element at address: 0x200028864140 with size: 0.023804 MiB 00:08:23.689 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:23.689 element at address: 0x200000859d40 with size: 0.016174 MiB 00:08:23.689 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58230 00:08:23.689 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:08:23.689 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:23.689 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:08:23.689 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58230 00:08:23.689 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:08:23.689 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58230 00:08:23.689 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:08:23.689 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58230 00:08:23.689 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:08:23.689 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:23.689 19:26:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:23.689 19:26:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58230 00:08:23.689 19:26:42 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58230 ']' 00:08:23.689 19:26:42 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58230 00:08:23.689 19:26:42 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:08:23.689 19:26:42 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:23.689 19:26:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58230 00:08:23.689 19:26:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:23.689 19:26:42 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:23.689 killing process with pid 58230 00:08:23.689 19:26:42 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58230' 00:08:23.689 19:26:42 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58230 00:08:23.689 19:26:42 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58230 00:08:25.603 00:08:25.603 real 0m3.266s 00:08:25.603 user 0m3.247s 00:08:25.603 sys 0m0.556s 00:08:25.603 19:26:44 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.603 19:26:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:25.603 ************************************ 00:08:25.603 END TEST dpdk_mem_utility 00:08:25.603 ************************************ 00:08:25.603 19:26:44 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:25.603 19:26:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:25.603 19:26:44 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.603 19:26:44 -- common/autotest_common.sh@10 -- # set +x 
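[editor's note] The dump above is the standard two-step flow this test drives: the env_dpdk_get_mem_stats RPC makes the running SPDK target write its DPDK memory state to /tmp/spdk_mem_dump.txt (the filename echoed in the RPC reply above), and scripts/dpdk_mem_info.py summarizes that file into the heap/mempool/memzone listing, with -m 0 expanding the per-element detail for heap id 0. A minimal manual reproduction, using only the commands visible in the trace and assuming an SPDK target is already running and reachable on rpc.py's default socket:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py                # summary: heaps, mempools, memzones
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0           # per-element listing for heap id 0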
00:08:25.603 ************************************ 00:08:25.603 START TEST event 00:08:25.603 ************************************ 00:08:25.603 19:26:44 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:08:25.603 * Looking for test storage... 00:08:25.603 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:08:25.603 19:26:44 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:25.603 19:26:44 event -- common/autotest_common.sh@1711 -- # lcov --version 00:08:25.603 19:26:44 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:25.603 19:26:44 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:25.603 19:26:44 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:25.603 19:26:44 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:25.603 19:26:44 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:25.603 19:26:44 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.603 19:26:44 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:25.603 19:26:44 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:25.603 19:26:44 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:25.603 19:26:44 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:25.603 19:26:44 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:25.603 19:26:44 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:25.603 19:26:44 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:25.603 19:26:44 event -- scripts/common.sh@344 -- # case "$op" in 00:08:25.603 19:26:44 event -- scripts/common.sh@345 -- # : 1 00:08:25.603 19:26:44 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:25.603 19:26:44 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:25.603 19:26:44 event -- scripts/common.sh@365 -- # decimal 1 00:08:25.603 19:26:44 event -- scripts/common.sh@353 -- # local d=1 00:08:25.603 19:26:44 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.603 19:26:44 event -- scripts/common.sh@355 -- # echo 1 00:08:25.603 19:26:44 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.603 19:26:44 event -- scripts/common.sh@366 -- # decimal 2 00:08:25.603 19:26:44 event -- scripts/common.sh@353 -- # local d=2 00:08:25.603 19:26:44 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.603 19:26:44 event -- scripts/common.sh@355 -- # echo 2 00:08:25.603 19:26:44 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.603 19:26:44 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.603 19:26:44 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.603 19:26:44 event -- scripts/common.sh@368 -- # return 0 00:08:25.603 19:26:44 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.603 19:26:44 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:25.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.603 --rc genhtml_branch_coverage=1 00:08:25.603 --rc genhtml_function_coverage=1 00:08:25.603 --rc genhtml_legend=1 00:08:25.603 --rc geninfo_all_blocks=1 00:08:25.603 --rc geninfo_unexecuted_blocks=1 00:08:25.603 00:08:25.603 ' 00:08:25.603 19:26:44 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:25.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.603 --rc genhtml_branch_coverage=1 00:08:25.603 --rc genhtml_function_coverage=1 00:08:25.603 --rc genhtml_legend=1 00:08:25.603 --rc 
geninfo_all_blocks=1 00:08:25.603 --rc geninfo_unexecuted_blocks=1 00:08:25.603 00:08:25.603 ' 00:08:25.603 19:26:44 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:25.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.603 --rc genhtml_branch_coverage=1 00:08:25.603 --rc genhtml_function_coverage=1 00:08:25.603 --rc genhtml_legend=1 00:08:25.603 --rc geninfo_all_blocks=1 00:08:25.603 --rc geninfo_unexecuted_blocks=1 00:08:25.603 00:08:25.603 ' 00:08:25.603 19:26:44 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:25.603 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.603 --rc genhtml_branch_coverage=1 00:08:25.603 --rc genhtml_function_coverage=1 00:08:25.603 --rc genhtml_legend=1 00:08:25.603 --rc geninfo_all_blocks=1 00:08:25.603 --rc geninfo_unexecuted_blocks=1 00:08:25.603 00:08:25.603 ' 00:08:25.603 19:26:44 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:25.603 19:26:44 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:25.603 19:26:44 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:25.603 19:26:44 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:25.603 19:26:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.603 19:26:44 event -- common/autotest_common.sh@10 -- # set +x 00:08:25.603 ************************************ 00:08:25.603 START TEST event_perf 00:08:25.603 ************************************ 00:08:25.603 19:26:44 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:25.603 Running I/O for 1 seconds...[2024-12-05 19:26:44.597094] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:08:25.603 [2024-12-05 19:26:44.597252] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58327 ] 00:08:25.865 [2024-12-05 19:26:44.764035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.127 [2024-12-05 19:26:44.908206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.127 [2024-12-05 19:26:44.908466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.127 Running I/O for 1 seconds...[2024-12-05 19:26:44.908954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.127 [2024-12-05 19:26:44.908959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.072 00:08:27.072 lcore 0: 136217 00:08:27.072 lcore 1: 136218 00:08:27.072 lcore 2: 136214 00:08:27.072 lcore 3: 136217 00:08:27.073 done. 
00:08:27.333 00:08:27.333 real 0m1.528s 00:08:27.333 user 0m4.299s 00:08:27.333 sys 0m0.096s 00:08:27.333 19:26:46 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.333 ************************************ 00:08:27.333 END TEST event_perf 00:08:27.333 ************************************ 00:08:27.333 19:26:46 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:27.333 19:26:46 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:27.333 19:26:46 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:27.333 19:26:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.333 19:26:46 event -- common/autotest_common.sh@10 -- # set +x 00:08:27.333 ************************************ 00:08:27.333 START TEST event_reactor 00:08:27.333 ************************************ 00:08:27.333 19:26:46 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:08:27.333 [2024-12-05 19:26:46.194611] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:08:27.333 [2024-12-05 19:26:46.194741] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58372 ] 00:08:27.594 [2024-12-05 19:26:46.364821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.594 [2024-12-05 19:26:46.503590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.974 test_start 00:08:28.974 oneshot 00:08:28.974 tick 100 00:08:28.974 tick 100 00:08:28.974 tick 250 00:08:28.974 tick 100 00:08:28.974 tick 100 00:08:28.974 tick 100 00:08:28.974 tick 250 00:08:28.974 tick 500 00:08:28.974 tick 100 00:08:28.974 tick 100 00:08:28.974 tick 250 00:08:28.974 tick 100 00:08:28.974 tick 100 00:08:28.974 test_end 00:08:28.974 00:08:28.974 real 0m1.529s 00:08:28.974 user 0m1.325s 00:08:28.974 sys 0m0.090s 00:08:28.974 19:26:47 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.974 ************************************ 00:08:28.974 END TEST event_reactor 00:08:28.974 ************************************ 00:08:28.974 19:26:47 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:28.974 19:26:47 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:28.974 19:26:47 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:28.974 19:26:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.974 19:26:47 event -- common/autotest_common.sh@10 -- # set +x 00:08:28.974 ************************************ 00:08:28.974 START TEST event_reactor_perf 00:08:28.974 ************************************ 00:08:28.974 19:26:47 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:28.974 [2024-12-05 19:26:47.790262] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
00:08:28.974 [2024-12-05 19:26:47.790406] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58414 ] 00:08:28.974 [2024-12-05 19:26:47.955354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.235 [2024-12-05 19:26:48.089609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.279 test_start 00:08:30.279 test_end 00:08:30.279 Performance: 303969 events per second 00:08:30.279 00:08:30.279 real 0m1.520s 00:08:30.279 user 0m1.329s 00:08:30.279 sys 0m0.078s 00:08:30.279 19:26:49 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.279 19:26:49 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:30.279 ************************************ 00:08:30.279 END TEST event_reactor_perf 00:08:30.279 ************************************ 00:08:30.540 19:26:49 event -- event/event.sh@49 -- # uname -s 00:08:30.540 19:26:49 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:30.540 19:26:49 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:30.540 19:26:49 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:30.540 19:26:49 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.540 19:26:49 event -- common/autotest_common.sh@10 -- # set +x 00:08:30.540 ************************************ 00:08:30.540 START TEST event_scheduler 00:08:30.540 ************************************ 00:08:30.540 19:26:49 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:08:30.540 * Looking for test storage... 
00:08:30.540 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:08:30.540 19:26:49 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:30.540 19:26:49 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:30.540 19:26:49 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:08:30.540 19:26:49 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:30.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:30.540 19:26:49 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:30.540 19:26:49 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:30.540 19:26:49 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:30.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.540 --rc genhtml_branch_coverage=1 00:08:30.540 --rc genhtml_function_coverage=1 00:08:30.540 --rc genhtml_legend=1 00:08:30.540 --rc geninfo_all_blocks=1 00:08:30.540 --rc geninfo_unexecuted_blocks=1 00:08:30.540 00:08:30.540 ' 00:08:30.540 19:26:49 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:30.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.540 --rc genhtml_branch_coverage=1 00:08:30.540 --rc genhtml_function_coverage=1 00:08:30.540 --rc genhtml_legend=1 00:08:30.540 --rc geninfo_all_blocks=1 00:08:30.540 --rc geninfo_unexecuted_blocks=1 00:08:30.540 00:08:30.540 ' 00:08:30.540 19:26:49 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:30.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.540 --rc genhtml_branch_coverage=1 00:08:30.540 --rc genhtml_function_coverage=1 00:08:30.540 --rc genhtml_legend=1 00:08:30.540 --rc geninfo_all_blocks=1 00:08:30.540 --rc geninfo_unexecuted_blocks=1 00:08:30.540 00:08:30.540 ' 00:08:30.540 19:26:49 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:30.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:30.540 --rc genhtml_branch_coverage=1 00:08:30.540 --rc genhtml_function_coverage=1 00:08:30.540 --rc genhtml_legend=1 00:08:30.540 --rc geninfo_all_blocks=1 00:08:30.540 --rc geninfo_unexecuted_blocks=1 00:08:30.540 00:08:30.540 ' 00:08:30.540 19:26:49 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:30.540 19:26:49 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58479 00:08:30.540 19:26:49 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:30.540 19:26:49 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58479 00:08:30.540 19:26:49 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58479 ']' 00:08:30.540 19:26:49 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.540 19:26:49 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:30.540 19:26:49 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.540 19:26:49 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:30.540 19:26:49 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:30.540 19:26:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:30.801 [2024-12-05 19:26:49.562924] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
00:08:30.801 [2024-12-05 19:26:49.563329] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58479 ] 00:08:30.801 [2024-12-05 19:26:49.731041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:31.063 [2024-12-05 19:26:49.853751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.063 [2024-12-05 19:26:49.854177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:31.063 [2024-12-05 19:26:49.854262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:31.063 [2024-12-05 19:26:49.854419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:31.634 19:26:50 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:31.634 19:26:50 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:08:31.634 19:26:50 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:31.634 19:26:50 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.634 19:26:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:31.634 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:31.634 POWER: Cannot set governor of lcore 0 to userspace 00:08:31.634 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:31.634 POWER: Cannot set governor of lcore 0 to performance 00:08:31.634 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:31.634 POWER: Cannot set governor of lcore 0 to userspace 00:08:31.634 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:08:31.634 POWER: Cannot set governor of lcore 0 to userspace 00:08:31.634 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:08:31.634 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:08:31.634 POWER: Unable to set Power Management Environment for lcore 0 00:08:31.634 [2024-12-05 19:26:50.425446] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:08:31.634 [2024-12-05 19:26:50.425475] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:08:31.634 [2024-12-05 19:26:50.425489] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:31.634 [2024-12-05 19:26:50.425510] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:31.634 [2024-12-05 19:26:50.425520] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:31.634 [2024-12-05 19:26:50.425530] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:31.634 19:26:50 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.634 19:26:50 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:31.634 19:26:50 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.634 19:26:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:31.896 [2024-12-05 19:26:50.689105] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
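[editor's note] The POWER and GUEST_CHANNEL errors above are expected inside a VM: the dynamic scheduler first tries the DPDK power governor and, finding no writable cpufreq interface or virtio power channel, falls back to its built-in thresholds (load limit 20, core limit 80, core busy 95) and continues. Because the scheduler app was launched with --wait-for-rpc, the scheduler must be selected before framework init completes; the RPC pair the test issues, as a sketch of the rpc_cmd lines in the trace:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_set_scheduler dynamic   # must precede init when app started with --wait-for-rpc
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init              # completes startup; app then logs 'Scheduler test application started'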
00:08:31.896 19:26:50 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.896 19:26:50 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:31.896 19:26:50 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.896 19:26:50 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.896 19:26:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:31.896 ************************************ 00:08:31.896 START TEST scheduler_create_thread 00:08:31.896 ************************************ 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:31.896 2 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:31.896 3 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:31.896 4 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:31.896 5 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:31.896 6 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:31.896 7 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:31.896 8 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:31.896 9 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:31.896 10 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.896 19:26:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:33.329 19:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.329 19:26:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:33.329 19:26:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:33.329 19:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.329 19:26:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:34.765 ************************************ 00:08:34.765 END TEST scheduler_create_thread 00:08:34.765 ************************************ 00:08:34.765 19:26:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.765 00:08:34.765 real 0m2.619s 00:08:34.765 user 0m0.015s 00:08:34.765 sys 0m0.006s 00:08:34.765 19:26:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.765 19:26:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:34.765 19:26:53 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:34.765 19:26:53 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58479 00:08:34.765 19:26:53 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58479 ']' 00:08:34.765 19:26:53 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58479 00:08:34.765 19:26:53 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:08:34.765 19:26:53 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:34.765 19:26:53 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58479 00:08:34.765 killing process with pid 58479 00:08:34.765 19:26:53 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:08:34.765 19:26:53 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:08:34.765 19:26:53 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58479' 00:08:34.765 19:26:53 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58479 00:08:34.765 19:26:53 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58479 00:08:35.027 [2024-12-05 19:26:53.809546] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
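[editor's note] The thread ids in the trace (11 for the half_active thread, 12 for the short-lived 'deleted' one) are returned by the scheduler_plugin RPCs the test issues against the app. A condensed sketch of that flow, using only calls shown above and assuming the test's scheduler_plugin module is importable by rpc.py (the harness arranges this):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100  # one per core in the trace
    $RPC --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0             # returns new thread_id (11 here)
    $RPC --plugin scheduler_plugin scheduler_thread_set_active 11 50                       # raise it to 50% active
    $RPC --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100               # returns thread_id (12 here)
    $RPC --plugin scheduler_plugin scheduler_thread_delete 12                              # and drop it again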
00:08:35.967 00:08:35.967 real 0m5.308s 00:08:35.967 user 0m9.171s 00:08:35.967 sys 0m0.424s 00:08:35.967 19:26:54 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.967 ************************************ 00:08:35.967 END TEST event_scheduler 00:08:35.967 ************************************ 00:08:35.967 19:26:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:35.967 19:26:54 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:35.967 19:26:54 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:35.967 19:26:54 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:35.967 19:26:54 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.967 19:26:54 event -- common/autotest_common.sh@10 -- # set +x 00:08:35.967 ************************************ 00:08:35.967 START TEST app_repeat 00:08:35.967 ************************************ 00:08:35.967 19:26:54 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:08:35.967 19:26:54 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:35.967 19:26:54 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:35.967 19:26:54 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:35.967 19:26:54 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:35.967 19:26:54 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:35.967 19:26:54 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:35.967 19:26:54 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:35.968 Process app_repeat pid: 58585 00:08:35.968 spdk_app_start Round 0 00:08:35.968 19:26:54 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58585 00:08:35.968 19:26:54 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:35.968 19:26:54 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58585' 00:08:35.968 19:26:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:35.968 19:26:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:35.968 19:26:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58585 /var/tmp/spdk-nbd.sock 00:08:35.968 19:26:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58585 ']' 00:08:35.968 19:26:54 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:35.968 19:26:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:35.968 19:26:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:35.968 19:26:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:35.968 19:26:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.968 19:26:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:35.968 [2024-12-05 19:26:54.748810] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
00:08:35.968 [2024-12-05 19:26:54.748963] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58585 ] 00:08:35.968 [2024-12-05 19:26:54.911329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:36.227 [2024-12-05 19:26:55.038990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.227 [2024-12-05 19:26:55.039177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.795 19:26:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.795 19:26:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:36.795 19:26:55 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:37.060 Malloc0 00:08:37.060 19:26:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:37.319 Malloc1 00:08:37.319 19:26:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:37.319 19:26:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.319 19:26:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:37.319 19:26:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:37.319 19:26:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:37.319 19:26:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:37.319 19:26:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:37.319 19:26:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.319 19:26:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:37.319 19:26:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:37.319 19:26:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:37.319 19:26:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:37.319 19:26:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:37.319 19:26:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:37.319 19:26:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:37.319 19:26:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:37.579 /dev/nbd0 00:08:37.579 19:26:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:37.579 19:26:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:37.579 19:26:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:37.579 19:26:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:37.579 19:26:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:37.579 19:26:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:37.579 19:26:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:37.579 19:26:56 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:08:37.579 19:26:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:37.579 19:26:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:37.579 19:26:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:37.579 1+0 records in 00:08:37.579 1+0 records out 00:08:37.579 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000832681 s, 4.9 MB/s 00:08:37.579 19:26:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:37.579 19:26:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:37.580 19:26:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:37.580 19:26:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:37.580 19:26:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:37.580 19:26:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:37.580 19:26:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:37.580 19:26:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:37.842 /dev/nbd1 00:08:37.842 19:26:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:37.842 19:26:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:37.842 19:26:56 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:37.842 19:26:56 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:37.842 19:26:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:37.842 19:26:56 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:37.842 19:26:56 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:37.842 19:26:56 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:37.842 19:26:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:37.842 19:26:56 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:37.842 19:26:56 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:37.842 1+0 records in 00:08:37.842 1+0 records out 00:08:37.842 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029343 s, 14.0 MB/s 00:08:37.842 19:26:56 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:37.842 19:26:56 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:37.842 19:26:56 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:37.842 19:26:56 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:37.842 19:26:56 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:37.842 19:26:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:37.842 19:26:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:37.842 19:26:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:37.842 19:26:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:37.842 
19:26:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:38.106 19:26:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:38.106 { 00:08:38.106 "nbd_device": "/dev/nbd0", 00:08:38.106 "bdev_name": "Malloc0" 00:08:38.106 }, 00:08:38.106 { 00:08:38.106 "nbd_device": "/dev/nbd1", 00:08:38.106 "bdev_name": "Malloc1" 00:08:38.106 } 00:08:38.106 ]' 00:08:38.106 19:26:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:38.106 { 00:08:38.106 "nbd_device": "/dev/nbd0", 00:08:38.106 "bdev_name": "Malloc0" 00:08:38.106 }, 00:08:38.106 { 00:08:38.106 "nbd_device": "/dev/nbd1", 00:08:38.106 "bdev_name": "Malloc1" 00:08:38.106 } 00:08:38.106 ]' 00:08:38.106 19:26:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:38.106 19:26:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:38.106 /dev/nbd1' 00:08:38.106 19:26:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:38.106 19:26:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:38.106 /dev/nbd1' 00:08:38.106 19:26:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:38.106 19:26:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:38.106 19:26:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:38.106 19:26:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:38.106 19:26:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:38.106 19:26:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:38.106 19:26:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:38.106 19:26:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:38.106 19:26:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:38.106 19:26:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:38.106 19:26:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:38.106 256+0 records in 00:08:38.106 256+0 records out 00:08:38.106 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010273 s, 102 MB/s 00:08:38.106 19:26:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:38.106 19:26:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:38.106 256+0 records in 00:08:38.106 256+0 records out 00:08:38.106 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209583 s, 50.0 MB/s 00:08:38.106 19:26:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:38.106 19:26:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:38.106 256+0 records in 00:08:38.106 256+0 records out 00:08:38.106 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277025 s, 37.9 MB/s 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:38.369 19:26:57 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:38.369 19:26:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:38.630 19:26:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:38.630 19:26:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:38.630 19:26:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:38.630 19:26:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:38.630 19:26:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:38.630 19:26:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:38.630 19:26:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:38.630 19:26:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:38.630 19:26:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:38.630 19:26:57 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:38.630 19:26:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:38.891 19:26:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:38.891 19:26:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:38.891 19:26:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:38.891 19:26:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:38.891 19:26:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:38.891 19:26:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:38.891 19:26:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:38.891 19:26:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:38.891 19:26:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:38.891 19:26:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:38.891 19:26:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:38.891 19:26:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:38.891 19:26:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:39.464 19:26:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:40.405 [2024-12-05 19:26:59.062254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:40.405 [2024-12-05 19:26:59.182116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.405 [2024-12-05 19:26:59.182145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.405 [2024-12-05 19:26:59.325269] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:40.405 [2024-12-05 19:26:59.325365] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:42.349 spdk_app_start Round 1 00:08:42.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:42.349 19:27:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:42.349 19:27:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:42.349 19:27:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58585 /var/tmp/spdk-nbd.sock 00:08:42.349 19:27:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58585 ']' 00:08:42.349 19:27:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:42.349 19:27:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.349 19:27:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
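[annotation] Each app_repeat round runs the same write/verify cycle that the dd and cmp lines above trace out. Condensed, with the temp file shortened to /tmp/nbdrandtest (the trace keeps it under the repo's test/event directory) and the block sizes and counts exactly as logged:

  # seed a 1 MiB random pattern
  dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
  # push the pattern to each exported nbd device with O_DIRECT...
  for d in /dev/nbd0 /dev/nbd1; do
      dd if=/tmp/nbdrandtest of="$d" bs=4096 count=256 oflag=direct
  done
  # ...then read each device back and byte-compare the first 1M
  for d in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M /tmp/nbdrandtest "$d"
  done
  rm /tmp/nbdrandtest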
00:08:42.349 19:27:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.349 19:27:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:42.608 19:27:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.608 19:27:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:42.608 19:27:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:42.883 Malloc0 00:08:42.883 19:27:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:43.145 Malloc1 00:08:43.145 19:27:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:43.145 19:27:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:43.145 19:27:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:43.145 19:27:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:43.145 19:27:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:43.145 19:27:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:43.145 19:27:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:43.145 19:27:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:43.145 19:27:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:43.145 19:27:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:43.145 19:27:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:43.145 19:27:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:43.145 19:27:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:43.145 19:27:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:43.145 19:27:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:43.145 19:27:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:43.404 /dev/nbd0 00:08:43.404 19:27:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:43.404 19:27:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:43.404 19:27:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:43.404 19:27:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:43.404 19:27:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:43.404 19:27:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:43.404 19:27:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:43.404 19:27:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:43.404 19:27:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:43.404 19:27:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:43.404 19:27:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:43.404 1+0 records in 00:08:43.404 1+0 records out 
00:08:43.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000650548 s, 6.3 MB/s 00:08:43.404 19:27:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:43.404 19:27:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:43.404 19:27:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:43.404 19:27:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:43.404 19:27:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:43.404 19:27:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:43.404 19:27:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:43.404 19:27:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:43.663 /dev/nbd1 00:08:43.663 19:27:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:43.663 19:27:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:43.663 19:27:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:43.663 19:27:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:43.663 19:27:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:43.663 19:27:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:43.663 19:27:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:43.663 19:27:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:43.663 19:27:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:43.663 19:27:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:43.663 19:27:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:43.663 1+0 records in 00:08:43.663 1+0 records out 00:08:43.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000666237 s, 6.1 MB/s 00:08:43.663 19:27:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:43.924 19:27:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:43.924 19:27:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:43.924 19:27:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:43.924 19:27:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:43.924 19:27:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:43.924 19:27:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:43.924 19:27:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:43.924 19:27:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:43.924 19:27:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:43.924 19:27:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:43.924 { 00:08:43.924 "nbd_device": "/dev/nbd0", 00:08:43.924 "bdev_name": "Malloc0" 00:08:43.924 }, 00:08:43.924 { 00:08:43.924 "nbd_device": "/dev/nbd1", 00:08:43.924 "bdev_name": "Malloc1" 00:08:43.924 } 
00:08:43.924 ]' 00:08:44.261 19:27:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:44.261 { 00:08:44.261 "nbd_device": "/dev/nbd0", 00:08:44.261 "bdev_name": "Malloc0" 00:08:44.261 }, 00:08:44.261 { 00:08:44.261 "nbd_device": "/dev/nbd1", 00:08:44.261 "bdev_name": "Malloc1" 00:08:44.261 } 00:08:44.261 ]' 00:08:44.261 19:27:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:44.261 19:27:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:44.261 /dev/nbd1' 00:08:44.261 19:27:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:44.261 /dev/nbd1' 00:08:44.262 19:27:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:44.262 19:27:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:44.262 19:27:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:44.262 19:27:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:44.262 19:27:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:44.262 19:27:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:44.262 19:27:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:44.262 19:27:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:44.262 19:27:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:44.262 19:27:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:44.262 19:27:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:44.262 19:27:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:44.262 256+0 records in 00:08:44.262 256+0 records out 00:08:44.262 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0053089 s, 198 MB/s 00:08:44.262 19:27:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:44.262 19:27:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:44.262 256+0 records in 00:08:44.262 256+0 records out 00:08:44.262 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0280365 s, 37.4 MB/s 00:08:44.262 19:27:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:44.262 19:27:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:44.262 256+0 records in 00:08:44.262 256+0 records out 00:08:44.262 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0310762 s, 33.7 MB/s 00:08:44.262 19:27:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:44.262 19:27:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:44.262 19:27:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:44.262 19:27:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:44.262 19:27:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:44.262 19:27:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:44.262 19:27:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:44.262 19:27:03 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:08:44.262 19:27:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:44.262 19:27:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:44.262 19:27:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:44.262 19:27:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:44.262 19:27:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:44.262 19:27:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.262 19:27:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:44.262 19:27:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:44.262 19:27:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:44.262 19:27:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:44.262 19:27:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:44.522 19:27:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:44.522 19:27:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:44.522 19:27:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:44.522 19:27:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:44.523 19:27:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:44.523 19:27:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:44.523 19:27:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:44.523 19:27:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:44.523 19:27:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:44.523 19:27:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:44.783 19:27:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:44.783 19:27:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:44.783 19:27:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:44.783 19:27:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:44.783 19:27:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:44.783 19:27:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:44.783 19:27:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:44.783 19:27:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:44.783 19:27:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:44.783 19:27:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:44.783 19:27:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:45.045 19:27:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:45.045 19:27:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:45.045 19:27:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:08:45.045 19:27:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:45.045 19:27:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:45.045 19:27:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:45.045 19:27:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:45.045 19:27:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:45.045 19:27:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:45.045 19:27:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:45.045 19:27:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:45.045 19:27:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:45.045 19:27:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:45.306 19:27:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:46.254 [2024-12-05 19:27:05.044356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:46.254 [2024-12-05 19:27:05.172902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.254 [2024-12-05 19:27:05.173218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.517 [2024-12-05 19:27:05.320664] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:46.517 [2024-12-05 19:27:05.320777] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:48.426 spdk_app_start Round 2 00:08:48.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:48.426 19:27:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:48.426 19:27:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:48.426 19:27:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58585 /var/tmp/spdk-nbd.sock 00:08:48.426 19:27:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58585 ']' 00:08:48.426 19:27:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:48.426 19:27:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:48.426 19:27:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
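[annotation] The waitfornbd helper that precedes every per-device dd above reduces to a bounded poll of /proc/partitions plus a one-block direct read proving the device actually services I/O. A paraphrase of the traced logic (the retry delay is an assumption; the trace only shows the 20-iteration bound):

  waitfornbd() {
      local nbd_name=$1 i size
      # wait up to 20 attempts for the kernel to publish the device
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1   # assumed pause between polls, not visible in the trace
      done
      # a single 4 KiB O_DIRECT read must produce a non-empty file
      dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]
  }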
00:08:48.426 19:27:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:48.426 19:27:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:48.687 19:27:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.687 19:27:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:48.687 19:27:07 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:48.947 Malloc0 00:08:48.947 19:27:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:49.209 Malloc1 00:08:49.209 19:27:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:49.209 19:27:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.209 19:27:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:49.209 19:27:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:49.209 19:27:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:49.209 19:27:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:49.209 19:27:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:49.209 19:27:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.209 19:27:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:49.209 19:27:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:49.209 19:27:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:49.209 19:27:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:49.209 19:27:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:49.209 19:27:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:49.209 19:27:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:49.209 19:27:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:49.209 /dev/nbd0 00:08:49.469 19:27:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:49.469 19:27:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:49.469 19:27:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:49.469 19:27:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:49.469 19:27:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:49.469 19:27:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:49.469 19:27:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:49.469 19:27:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:49.469 19:27:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:49.469 19:27:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:49.469 19:27:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:49.469 1+0 records in 00:08:49.469 1+0 records out 
00:08:49.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000592243 s, 6.9 MB/s 00:08:49.469 19:27:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:49.469 19:27:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:49.469 19:27:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:49.469 19:27:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:49.469 19:27:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:49.469 19:27:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:49.469 19:27:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:49.469 19:27:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:49.469 /dev/nbd1 00:08:49.729 19:27:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:49.729 19:27:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:49.729 19:27:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:49.729 19:27:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:08:49.729 19:27:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:49.729 19:27:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:49.729 19:27:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:49.729 19:27:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:08:49.729 19:27:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:49.729 19:27:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:49.729 19:27:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:49.729 1+0 records in 00:08:49.729 1+0 records out 00:08:49.729 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291368 s, 14.1 MB/s 00:08:49.729 19:27:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:49.729 19:27:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:08:49.729 19:27:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:08:49.729 19:27:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:49.729 19:27:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:08:49.729 19:27:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:49.729 19:27:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:49.729 19:27:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:49.729 19:27:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:49.729 19:27:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:49.989 { 00:08:49.989 "nbd_device": "/dev/nbd0", 00:08:49.989 "bdev_name": "Malloc0" 00:08:49.989 }, 00:08:49.989 { 00:08:49.989 "nbd_device": "/dev/nbd1", 00:08:49.989 "bdev_name": "Malloc1" 00:08:49.989 } 
00:08:49.989 ]' 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:49.989 { 00:08:49.989 "nbd_device": "/dev/nbd0", 00:08:49.989 "bdev_name": "Malloc0" 00:08:49.989 }, 00:08:49.989 { 00:08:49.989 "nbd_device": "/dev/nbd1", 00:08:49.989 "bdev_name": "Malloc1" 00:08:49.989 } 00:08:49.989 ]' 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:49.989 /dev/nbd1' 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:49.989 /dev/nbd1' 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:49.989 256+0 records in 00:08:49.989 256+0 records out 00:08:49.989 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00719493 s, 146 MB/s 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:49.989 256+0 records in 00:08:49.989 256+0 records out 00:08:49.989 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0178982 s, 58.6 MB/s 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:49.989 256+0 records in 00:08:49.989 256+0 records out 00:08:49.989 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0951718 s, 11.0 MB/s 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:49.989 19:27:08 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:49.989 19:27:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:08:50.249 19:27:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:50.249 19:27:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:08:50.249 19:27:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:08:50.249 19:27:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:50.249 19:27:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.249 19:27:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:50.249 19:27:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:50.249 19:27:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:50.249 19:27:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.249 19:27:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:50.249 19:27:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:50.249 19:27:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:50.249 19:27:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:50.249 19:27:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.249 19:27:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.249 19:27:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:50.249 19:27:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:50.249 19:27:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.249 19:27:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.249 19:27:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:50.510 19:27:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:50.510 19:27:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:50.510 19:27:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:50.510 19:27:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.510 19:27:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.510 19:27:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:50.510 19:27:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:50.510 19:27:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.510 19:27:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:50.510 19:27:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.510 19:27:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:50.772 19:27:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:50.772 19:27:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:50.772 19:27:09 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:08:50.772 19:27:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:50.772 19:27:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:50.772 19:27:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:50.772 19:27:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:50.772 19:27:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:50.772 19:27:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:50.772 19:27:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:50.772 19:27:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:50.772 19:27:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:50.772 19:27:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:51.347 19:27:10 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:51.919 [2024-12-05 19:27:10.872120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:52.179 [2024-12-05 19:27:11.011917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.179 [2024-12-05 19:27:11.012188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.179 [2024-12-05 19:27:11.173060] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:52.179 [2024-12-05 19:27:11.173201] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:54.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:54.091 19:27:13 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58585 /var/tmp/spdk-nbd.sock 00:08:54.091 19:27:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58585 ']' 00:08:54.091 19:27:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:54.091 19:27:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.091 19:27:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
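[annotation] The count checks that bracket each round ask the target which devices are still exported and grep-count the result; after teardown the pipeline must report 0. A one-line equivalent against the traced socket (the `|| true` mirrors the traced helper, since grep -c exits non-zero when nothing matches):

  count=$(scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
          | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ] && echo "all nbd devices detached"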
00:08:54.091 19:27:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.091 19:27:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:54.351 19:27:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.351 19:27:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:08:54.351 19:27:13 event.app_repeat -- event/event.sh@39 -- # killprocess 58585 00:08:54.351 19:27:13 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58585 ']' 00:08:54.351 19:27:13 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58585 00:08:54.351 19:27:13 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:08:54.351 19:27:13 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.351 19:27:13 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58585 00:08:54.351 killing process with pid 58585 00:08:54.351 19:27:13 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.351 19:27:13 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.351 19:27:13 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58585' 00:08:54.351 19:27:13 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58585 00:08:54.351 19:27:13 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58585 00:08:55.291 spdk_app_start is called in Round 0. 00:08:55.291 Shutdown signal received, stop current app iteration 00:08:55.291 Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 reinitialization... 00:08:55.291 spdk_app_start is called in Round 1. 00:08:55.291 Shutdown signal received, stop current app iteration 00:08:55.291 Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 reinitialization... 00:08:55.291 spdk_app_start is called in Round 2. 00:08:55.291 Shutdown signal received, stop current app iteration 00:08:55.291 Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 reinitialization... 00:08:55.291 spdk_app_start is called in Round 3. 00:08:55.291 Shutdown signal received, stop current app iteration 00:08:55.291 ************************************ 00:08:55.291 END TEST app_repeat 00:08:55.291 ************************************ 00:08:55.291 19:27:14 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:55.291 19:27:14 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:55.291 00:08:55.291 real 0m19.385s 00:08:55.291 user 0m42.117s 00:08:55.291 sys 0m2.681s 00:08:55.291 19:27:14 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:55.291 19:27:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:55.291 19:27:14 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:55.291 19:27:14 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:55.291 19:27:14 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:55.291 19:27:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:55.291 19:27:14 event -- common/autotest_common.sh@10 -- # set +x 00:08:55.291 ************************************ 00:08:55.291 START TEST cpu_locks 00:08:55.291 ************************************ 00:08:55.291 19:27:14 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:08:55.291 * Looking for test storage... 
00:08:55.291 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:08:55.291 19:27:14 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:55.291 19:27:14 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version
00:08:55.291 19:27:14 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:55.552 19:27:14 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:55.552 19:27:14 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:08:55.552 19:27:14 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:55.552 19:27:14 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:55.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:55.552 --rc genhtml_branch_coverage=1
00:08:55.552 --rc genhtml_function_coverage=1
00:08:55.552 --rc genhtml_legend=1
00:08:55.552 --rc geninfo_all_blocks=1
00:08:55.552 --rc geninfo_unexecuted_blocks=1
00:08:55.552
00:08:55.552 '
00:08:55.552 19:27:14 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:55.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:55.552 --rc genhtml_branch_coverage=1
00:08:55.552 --rc genhtml_function_coverage=1
00:08:55.552 --rc genhtml_legend=1
00:08:55.552 --rc geninfo_all_blocks=1
00:08:55.552 --rc geninfo_unexecuted_blocks=1
00:08:55.552
00:08:55.552 '
00:08:55.552 19:27:14 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:08:55.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:55.552 --rc genhtml_branch_coverage=1
00:08:55.552 --rc genhtml_function_coverage=1
00:08:55.552 --rc genhtml_legend=1
00:08:55.552 --rc geninfo_all_blocks=1
00:08:55.552 --rc geninfo_unexecuted_blocks=1
00:08:55.552
00:08:55.552 '
00:08:55.552 19:27:14 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:08:55.552 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:55.552 --rc genhtml_branch_coverage=1
00:08:55.552 --rc genhtml_function_coverage=1
00:08:55.552 --rc genhtml_legend=1
00:08:55.552 --rc geninfo_all_blocks=1
00:08:55.552 --rc geninfo_unexecuted_blocks=1
00:08:55.552
00:08:55.552 '
00:08:55.552 19:27:14 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:08:55.552 19:27:14 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:08:55.552 19:27:14 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:08:55.552 19:27:14 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:08:55.552 19:27:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:55.552 19:27:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:55.552 19:27:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:55.552 ************************************
00:08:55.552 START TEST default_locks
00:08:55.552 ************************************
00:08:55.552 19:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:08:55.552 19:27:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59034
00:08:55.552 19:27:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59034
00:08:55.552 19:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59034 ']'
00:08:55.552 19:27:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:08:55.552 19:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:55.552 19:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:55.552 19:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:55.552 19:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:55.552 19:27:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:08:55.552 [2024-12-05 19:27:14.443210] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization...
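The scripts/common.sh fragment traced above ('lt 1.15 2' expanding into 'cmp_versions 1.15 < 2') is how the suite decides whether the installed lcov is old enough to need the --rc coverage options. The traced logic, condensed into a sketch (names follow the trace; the real function also validates each field through its decimal helper, which this sketch glosses over, so non-numeric fields are not handled here):

    # Compare two dotted versions field by field, e.g. cmp_versions 1.15 '<' 2.
    cmp_versions() {
        local IFS=.-:            # split fields on '.', '-' and ':' as in the trace
        local op=$2 v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all fields equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # so `lt 1.15 2` returns 0, as logged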
00:08:55.552 [2024-12-05 19:27:14.443362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59034 ]
00:08:55.829 [2024-12-05 19:27:14.605024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:55.829 [2024-12-05 19:27:14.745464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:56.768 19:27:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:56.768 19:27:15 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:08:56.768 19:27:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59034
00:08:56.768 19:27:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59034
00:08:56.768 19:27:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:08:56.768 19:27:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59034
00:08:56.768 19:27:15 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59034 ']'
00:08:56.768 19:27:15 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59034
00:08:56.768 19:27:15 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:08:56.768 19:27:15 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:56.768 19:27:15 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59034
00:08:56.768 19:27:15 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:56.768 19:27:15 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 59034
00:08:56.768 19:27:15 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59034'
00:08:56.768 19:27:15 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59034
00:08:56.768 19:27:15 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59034
00:08:58.679 19:27:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59034
00:08:58.679 19:27:17 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:08:58.679 19:27:17 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59034
00:08:58.679 19:27:17 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:08:58.679 19:27:17 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:58.679 19:27:17 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:08:58.679 19:27:17 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:58.679 19:27:17 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59034
00:08:58.679 19:27:17 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59034 ']'
00:08:58.679 19:27:17 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
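The 'locks_exist 59034' step above expands to just two commands, and that is the whole mechanism under test: a live SPDK target holds flock()s on its /var/tmp/spdk_cpu_lock_* files, so util-linux lslocks run against the pid must mention the lock-file name. As a minimal sketch of cpu_locks.sh@22 as traced:

    # Succeed iff the pid still holds a CPU-core lock file
    # (named spdk_cpu_lock_NNN under /var/tmp).
    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }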
00:08:58.679 19:27:17 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:58.679 19:27:17 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:58.679 19:27:17 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
ERROR: process (pid: 59034) is no longer running
00:08:58.679 19:27:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:08:58.679 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59034) - No such process
00:08:58.679 19:27:17 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:58.679 19:27:17 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:08:58.679 19:27:17 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:08:58.679 19:27:17 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:58.679 19:27:17 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:58.679 19:27:17 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:58.679 19:27:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:08:58.679 19:27:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:08:58.679 19:27:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:08:58.679 19:27:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:08:58.679
00:08:58.679 real 0m3.050s
00:08:58.679 user 0m2.981s
00:08:58.679 sys 0m0.587s
00:08:58.679 19:27:17 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:58.679 ************************************
00:08:58.679 END TEST default_locks
00:08:58.679 ************************************
00:08:58.679 19:27:17 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:08:58.679 19:27:17 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:08:58.679 19:27:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:58.679 19:27:17 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:58.679 19:27:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:08:58.679 ************************************
00:08:58.679 START TEST default_locks_via_rpc
00:08:58.679 ************************************
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
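The es=0 / valid_exec_arg / es=1 dance above is the NOT wrapper from autotest_common.sh: run a command that is expected to fail, and succeed only if it really did. Stripped to its core (a sketch; the real helper also special-cases exit codes above 128, i.e. signal deaths, via the '(( es > 128 ))' branch visible in the trace):

    # NOT <cmd...>: invert the exit status, so an expected failure passes.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # success only when the wrapped command failed
    }
    # e.g. `NOT waitforlisten 59034` passes once pid 59034 is gone, as logged above.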
00:08:58.680 19:27:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:08:58.680 19:27:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59102
00:08:58.680 19:27:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59102
00:08:58.680 19:27:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59102 ']'
00:08:58.680 19:27:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:58.680 19:27:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:58.680 19:27:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:58.680 19:27:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:08:58.680 19:27:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:58.680 19:27:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:58.940 [2024-12-05 19:27:17.546783] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization...
00:08:58.940 [2024-12-05 19:27:17.546922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59102 ]
00:08:58.940 [2024-12-05 19:27:17.708227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:58.940 [2024-12-05 19:27:17.834540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59102
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59102
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59102
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59102 ']'
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59102
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59102
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
killing process with pid 59102
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59102'
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59102
00:08:59.883 19:27:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59102
00:09:01.799
00:09:01.799 real 0m2.957s
00:09:01.799 user 0m2.891s
00:09:01.799 sys 0m0.525s
00:09:01.799 19:27:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:01.799 ************************************
00:09:01.799 END TEST default_locks_via_rpc
00:09:01.799 ************************************
00:09:01.799 19:27:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:01.799 19:27:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:09:01.799 19:27:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:01.799 19:27:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:01.799 19:27:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:01.799 ************************************
00:09:01.799 START TEST non_locking_app_on_locked_coremask
00:09:01.799 ************************************
00:09:01.799 19:27:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:09:01.799 19:27:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59160
00:09:01.799 19:27:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59160 /var/tmp/spdk.sock
00:09:01.799 19:27:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59160 ']'
00:09:01.799 19:27:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
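The killprocess sequence that recurs throughout this suite (@954-@978 above) reduces to: probe the pid, sanity-check what it is, SIGTERM it, and reap it so its CPU-core lock files are released before the next test starts a new target. A condensed sketch of the traced flow (the real helper also escalates via sudo when the process name says so):

    # killprocess <pid>, condensed from the trace above.
    killprocess() {
        local pid=$1
        kill -0 "$pid"                                   # still alive?
        process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 for an SPDK target
        echo "killing process with pid $pid"
        kill "$pid"    # default signal is SIGTERM
        wait "$pid"    # works because spdk_tgt was started by this same shell
    }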
00:09:01.799 19:27:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:01.799 19:27:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:01.799 19:27:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:01.799 19:27:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:01.799 19:27:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:09:01.799 [2024-12-05 19:27:20.598388] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization...
00:09:01.799 [2024-12-05 19:27:20.598556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59160 ]
00:09:01.799 [2024-12-05 19:27:20.767492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:02.060 [2024-12-05 19:27:20.891740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:02.632 19:27:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:02.632 19:27:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:02.632 19:27:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59181
00:09:02.632 19:27:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59181 /var/tmp/spdk2.sock
00:09:02.632 19:27:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:09:02.632 19:27:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59181 ']'
00:09:02.632 19:27:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:02.632 19:27:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:02.632 19:27:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:02.632 19:27:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:02.632 19:27:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:02.893 [2024-12-05 19:27:21.649319] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization...
00:09:02.893 [2024-12-05 19:27:21.649459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59181 ]
00:09:02.893 [2024-12-05 19:27:21.831176] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:02.893 [2024-12-05 19:27:21.831271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:03.155 [2024-12-05 19:27:22.065403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:04.542 19:27:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:04.542 19:27:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:04.542 19:27:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59160
00:09:04.542 19:27:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59160
00:09:04.542 19:27:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:04.542 19:27:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59160
00:09:04.542 19:27:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59160 ']'
00:09:04.542 19:27:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59160
00:09:04.542 19:27:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:04.542 19:27:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:04.542 19:27:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59160
00:09:04.803 killing process with pid 59160
00:09:04.804 19:27:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:04.804 19:27:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:04.804 19:27:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59160'
00:09:04.804 19:27:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59160
00:09:04.804 19:27:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59160
00:09:08.102 19:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59181
00:09:08.102 19:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59181 ']'
00:09:08.102 19:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59181
00:09:08.102 19:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:08.102 19:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:08.102 19:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59181
killing process with pid 59181
19:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:08.102 19:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:08.102 19:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59181'
00:09:08.102 19:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59181
00:09:08.102 19:27:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59181
00:09:09.484 ************************************
00:09:09.484 END TEST non_locking_app_on_locked_coremask
00:09:09.484 ************************************
00:09:09.484
00:09:09.484 real 0m7.883s
00:09:09.484 user 0m8.006s
00:09:09.484 sys 0m0.978s
00:09:09.484 19:27:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:09.484 19:27:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:09.484 19:27:28 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:09:09.484 19:27:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:09.484 19:27:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:09.484 19:27:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:09.484 ************************************
00:09:09.484 START TEST locking_app_on_unlocked_coremask
00:09:09.484 ************************************
00:09:09.484 19:27:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:09:09.484 19:27:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59288
00:09:09.484 19:27:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59288 /var/tmp/spdk.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:09.484 19:27:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59288 ']'
00:09:09.484 19:27:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:09.484 19:27:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:09.484 19:27:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:09.484 19:27:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:09.484 19:27:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:09.484 19:27:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:09:09.766 [2024-12-05 19:27:28.545460] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization...
00:09:09.766 [2024-12-05 19:27:28.545849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59288 ]
00:09:09.766 [2024-12-05 19:27:28.709140] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:09.766 [2024-12-05 19:27:28.709198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:10.027 [2024-12-05 19:27:28.848703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:10.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:10.598 19:27:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:10.598 19:27:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:10.598 19:27:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59304
00:09:10.598 19:27:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59304 /var/tmp/spdk2.sock
00:09:10.598 19:27:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:09:10.598 19:27:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59304 ']'
00:09:10.598 19:27:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:10.598 19:27:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:10.598 19:27:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:10.598 19:27:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:10.598 19:27:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:10.861 [2024-12-05 19:27:29.670912] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization...
00:09:10.861 [2024-12-05 19:27:29.671564] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59304 ]
00:09:10.861 [2024-12-05 19:27:29.854379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:11.435 [2024-12-05 19:27:30.134779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:13.353 19:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:13.353 19:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:13.353 19:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59304
00:09:13.353 19:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59304
00:09:13.353 19:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:13.926 19:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59288
00:09:13.926 19:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59288 ']'
00:09:13.926 19:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59288
00:09:13.926 19:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:13.926 19:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:13.926 19:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59288
killing process with pid 59288
00:09:13.926 19:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:13.926 19:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:13.926 19:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59288'
00:09:13.926 19:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59288
00:09:13.926 19:27:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59288
00:09:17.249 19:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59304
00:09:17.249 19:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59304 ']'
00:09:17.249 19:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59304
00:09:17.249 19:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:17.249 19:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:17.249 19:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59304
killing process with pid 59304
00:09:17.249 19:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
19:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:17.249 19:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59304'
00:09:17.249 19:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59304
00:09:17.249 19:27:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59304
00:09:19.165 ************************************
00:09:19.165 END TEST locking_app_on_unlocked_coremask
00:09:19.165 ************************************
00:09:19.165
00:09:19.165 real 0m9.406s
00:09:19.165 user 0m9.660s
00:09:19.165 sys 0m1.176s
00:09:19.165 19:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:19.165 19:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:19.165 19:27:37 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:09:19.165 19:27:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:19.165 19:27:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:19.165 19:27:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:19.165 ************************************
00:09:19.165 START TEST locking_app_on_locked_coremask
00:09:19.165 ************************************
00:09:19.166 19:27:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:09:19.166 19:27:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59436
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:19.166 19:27:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59436 /var/tmp/spdk.sock
00:09:19.166 19:27:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59436 ']'
00:09:19.166 19:27:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:09:19.166 19:27:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:19.166 19:27:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:19.166 19:27:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:19.166 19:27:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:19.166 19:27:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:19.166 [2024-12-05 19:27:38.021431] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization...
00:09:19.166 [2024-12-05 19:27:38.021594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59436 ]
00:09:19.426 [2024-12-05 19:27:38.184459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:19.426 [2024-12-05 19:27:38.316546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:20.368 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:20.368 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:20.368 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59452
00:09:20.368 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59452 /var/tmp/spdk2.sock
00:09:20.368 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:09:20.368 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59452 /var/tmp/spdk2.sock
00:09:20.368 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:09:20.368 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:09:20.368 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:20.368 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:20.368 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:20.368 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59452 /var/tmp/spdk2.sock
00:09:20.368 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59452 ']'
00:09:20.368 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:20.368 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:20.368 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:20.368 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:20.368 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:20.368 [2024-12-05 19:27:39.218863] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization...
00:09:20.368 [2024-12-05 19:27:39.219014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59452 ]
00:09:20.687 [2024-12-05 19:27:39.404949] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59436 has claimed it.
00:09:20.687 [2024-12-05 19:27:39.405048] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:09:20.946 ERROR: process (pid: 59452) is no longer running
00:09:20.946 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59452) - No such process
00:09:20.946 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:20.946 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:09:20.946 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:09:20.946 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:20.946 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:20.946 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:20.946 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59436
00:09:20.946 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59436
00:09:20.946 19:27:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:21.207 19:27:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59436
00:09:21.207 19:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59436 ']'
00:09:21.207 19:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59436
00:09:21.207 19:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:21.207 19:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:21.207 19:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59436
killing process with pid 59436
00:09:21.207 19:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:21.207 19:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:21.207 19:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59436'
00:09:21.207 19:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59436
00:09:21.207 19:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59436
00:09:23.114
00:09:23.114 real 0m4.089s
00:09:23.114 user 0m4.211s
00:09:23.114 sys 0m0.725s
00:09:23.114 19:27:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:23.114 ************************************
00:09:23.114 END TEST locking_app_on_locked_coremask
00:09:23.114 ************************************
00:09:23.115 19:27:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:23.115 19:27:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:09:23.115 19:27:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:23.115 19:27:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:23.115 19:27:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:23.115 ************************************
00:09:23.115 START TEST locking_overlapped_coremask
00:09:23.115 ************************************
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:23.115 19:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:09:23.115 19:27:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59516
00:09:23.115 19:27:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59516 /var/tmp/spdk.sock
00:09:23.115 19:27:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:09:23.115 19:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59516 ']'
00:09:23.115 19:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:23.115 19:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:23.115 19:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:23.115 19:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:23.115 19:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:23.394 [2024-12-05 19:27:42.219482] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization...
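Every START TEST / END TEST banner plus the real/user/sys trailer in this log comes from the run_test wrapper, whose observable behaviour is roughly the following (shape inferred from the banners and `time` output in this log, not copied from the source):

    # run_test <name> <cmd...>: banner, time the body, banner again.
    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"    # emits the real/user/sys lines seen after each test
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }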
00:09:23.394 [2024-12-05 19:27:42.219703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59516 ]
00:09:23.653 [2024-12-05 19:27:42.407265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:23.653 [2024-12-05 19:27:42.566461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:23.653 [2024-12-05 19:27:42.566577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:23.653 [2024-12-05 19:27:42.566587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:24.597 19:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:24.597 19:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:24.597 19:27:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59534
00:09:24.597 19:27:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:09:24.597 19:27:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59534 /var/tmp/spdk2.sock
00:09:24.597 19:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:09:24.597 19:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59534 /var/tmp/spdk2.sock
00:09:24.597 19:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:09:24.597 19:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:24.597 19:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:24.597 19:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:24.597 19:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59534 /var/tmp/spdk2.sock
00:09:24.597 19:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59534 ']'
00:09:24.597 19:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:24.597 19:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:24.597 19:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:24.597 19:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:24.597 19:27:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:24.597 [2024-12-05 19:27:43.447324] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization...
00:09:24.597 [2024-12-05 19:27:43.447911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59534 ]
00:09:24.860 [2024-12-05 19:27:43.631166] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59516 has claimed it.
00:09:24.860 [2024-12-05 19:27:43.631260] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:09:25.122 ERROR: process (pid: 59534) is no longer running
00:09:25.122 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59534) - No such process
00:09:25.122 19:27:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:25.122 19:27:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:09:25.122 19:27:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:09:25.122 19:27:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:25.122 19:27:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:25.122 19:27:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:25.122 19:27:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:09:25.122 19:27:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:09:25.122 19:27:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:09:25.122 19:27:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:09:25.122 19:27:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59516
00:09:25.122 19:27:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59516 ']'
00:09:25.122 19:27:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59516
00:09:25.122 19:27:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:09:25.122 19:27:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:25.122 19:27:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59516
00:09:25.122 19:27:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:25.122 19:27:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:25.122 19:27:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59516'
killing process with pid 59516
00:09:25.122 19:27:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59516
00:09:25.122 19:27:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59516
00:09:27.037
00:09:27.037 real 0m3.892s
00:09:27.037 user 0m10.209s
00:09:27.037 sys 0m0.663s
00:09:27.037 19:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:27.037 ************************************
00:09:27.037 19:27:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:27.037 END TEST locking_overlapped_coremask
00:09:27.037 ************************************
00:09:27.037 19:27:46 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:09:27.037 19:27:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:27.037 19:27:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:27.037 19:27:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:27.298 ************************************
00:09:27.298 START TEST locking_overlapped_coremask_via_rpc
00:09:27.298 ************************************
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:27.298 19:27:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:09:27.298 19:27:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59587
00:09:27.298 19:27:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59587 /var/tmp/spdk.sock
00:09:27.298 19:27:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59587 ']'
00:09:27.298 19:27:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:27.298 19:27:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:27.298 19:27:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:09:27.298 19:27:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:27.298 19:27:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:27.298 19:27:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:27.298 [2024-12-05 19:27:46.151430] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization...
00:09:27.298 [2024-12-05 19:27:46.151590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59587 ]
00:09:27.559 [2024-12-05 19:27:46.315597] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:27.559 [2024-12-05 19:27:46.315683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:27.559 [2024-12-05 19:27:46.463440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.559 [2024-12-05 19:27:46.463828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.559 [2024-12-05 19:27:46.463853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.504 19:27:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.504 19:27:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:28.504 19:27:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59616 00:09:28.504 19:27:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59616 /var/tmp/spdk2.sock 00:09:28.504 19:27:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59616 ']' 00:09:28.504 19:27:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:28.504 19:27:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.504 19:27:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:28.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:28.504 19:27:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.504 19:27:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:28.504 19:27:47 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:28.504 [2024-12-05 19:27:47.367103] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:09:28.504 [2024-12-05 19:27:47.367890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59616 ] 00:09:28.764 [2024-12-05 19:27:47.551462] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
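The second target's mask deliberately overlaps the first's on exactly one core, which is worth spelling out since it is the entire point of the test:

    0x07 = 0b00111  -> cores 0,1,2   (first target)
    0x1c = 0b11100  -> cores 2,3,4   (second target)

Both masks cover three cores (hence "Total cores available: 3" for each), and they intersect only at core 2 — the core named in the claim failure that follows.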
00:09:28.764 [2024-12-05 19:27:47.551556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:29.024 [2024-12-05 19:27:47.783850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:29.024 [2024-12-05 19:27:47.783911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:29.024 [2024-12-05 19:27:47.783942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:29.971 [2024-12-05 19:27:48.947284] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59587 has claimed it. 00:09:29.971 request: 00:09:29.971 { 00:09:29.971 "method": "framework_enable_cpumask_locks", 00:09:29.971 "req_id": 1 00:09:29.971 } 00:09:29.971 Got JSON-RPC error response 00:09:29.971 response: 00:09:29.971 { 00:09:29.971 "code": -32603, 00:09:29.971 "message": "Failed to claim CPU core: 2" 00:09:29.971 } 00:09:29.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
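This -32603 response is the behavior under test: the first target already holds the lock files for cores 0-2, so the second target's framework_enable_cpumask_locks cannot claim core 2. A rough by-hand reproduction, assuming the same build paths and sockets as this run (the lock-file mechanics live in app.c and are inferred here only from the messages above):

    # terminal 1: cores 0-2 on the default socket, then take the locks over RPC
    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    scripts/rpc.py framework_enable_cpumask_locks
    ls /var/tmp/spdk_cpu_lock_*            # expect _000 _001 _002

    # terminal 2: cores 2-4 on a second socket; the overlapping claim must fail
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # expected: JSON-RPC error -32603 "Failed to claim CPU core: 2"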
00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59587 /var/tmp/spdk.sock 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59587 ']' 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.971 19:27:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:30.234 19:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.234 19:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:30.234 19:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59616 /var/tmp/spdk2.sock 00:09:30.234 19:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59616 ']' 00:09:30.234 19:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:30.234 19:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:30.234 19:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
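The es bookkeeping in the lines above is the harness's negative-test idiom: the failing rpc_cmd runs under a NOT wrapper, so the test step succeeds precisely because the RPC failed. A simplified sketch of the pattern; the real helper in test/common/autotest_common.sh also validates its argument and special-cases exit codes above 128 (signal deaths), both omitted here:

    # succeed iff the wrapped command fails
    NOT() {
        local es=0
        "$@" || es=$?
        # real helper: (( es > 128 )) gets extra handling for signal deaths
        (( es != 0 ))
    }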
00:09:30.234 19:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:30.234 19:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.495 ************************************ 00:09:30.495 END TEST locking_overlapped_coremask_via_rpc 00:09:30.495 ************************************ 00:09:30.495 19:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.495 19:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:30.495 19:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:30.495 19:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:30.495 19:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:30.495 19:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:30.495 00:09:30.495 real 0m3.332s 00:09:30.495 user 0m1.132s 00:09:30.495 sys 0m0.149s 00:09:30.495 19:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.495 19:27:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.495 19:27:49 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:30.495 19:27:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59587 ]] 00:09:30.495 19:27:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59587 00:09:30.495 19:27:49 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59587 ']' 00:09:30.495 19:27:49 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59587 00:09:30.495 19:27:49 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:30.495 19:27:49 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.495 19:27:49 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59587 00:09:30.495 killing process with pid 59587 00:09:30.495 19:27:49 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.495 19:27:49 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.495 19:27:49 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59587' 00:09:30.495 19:27:49 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59587 00:09:30.495 19:27:49 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59587 00:09:32.412 19:27:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59616 ]] 00:09:32.412 19:27:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59616 00:09:32.412 19:27:50 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59616 ']' 00:09:32.412 19:27:50 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59616 00:09:32.412 19:27:50 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:09:32.412 19:27:50 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:32.412 
19:27:50 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59616 00:09:32.412 killing process with pid 59616 00:09:32.412 19:27:51 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:32.412 19:27:51 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:32.412 19:27:51 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59616' 00:09:32.412 19:27:51 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59616 00:09:32.412 19:27:51 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59616 00:09:33.817 19:27:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:33.817 19:27:52 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:33.817 19:27:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59587 ]] 00:09:33.817 Process with pid 59587 is not found 00:09:33.817 Process with pid 59616 is not found 00:09:33.817 19:27:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59587 00:09:33.817 19:27:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59587 ']' 00:09:33.817 19:27:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59587 00:09:33.817 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59587) - No such process 00:09:33.817 19:27:52 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59587 is not found' 00:09:33.817 19:27:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59616 ]] 00:09:33.817 19:27:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59616 00:09:33.817 19:27:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59616 ']' 00:09:33.817 19:27:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59616 00:09:33.817 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59616) - No such process 00:09:33.817 19:27:52 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59616 is not found' 00:09:33.817 19:27:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:33.817 ************************************ 00:09:33.817 END TEST cpu_locks 00:09:33.817 ************************************ 00:09:33.817 00:09:33.817 real 0m38.367s 00:09:33.817 user 1m2.826s 00:09:33.817 sys 0m5.827s 00:09:33.817 19:27:52 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.817 19:27:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:33.817 ************************************ 00:09:33.817 END TEST event 00:09:33.817 ************************************ 00:09:33.817 00:09:33.817 real 1m8.164s 00:09:33.817 user 2m1.230s 00:09:33.817 sys 0m9.462s 00:09:33.817 19:27:52 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.817 19:27:52 event -- common/autotest_common.sh@10 -- # set +x 00:09:33.817 19:27:52 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:33.817 19:27:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.817 19:27:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.817 19:27:52 -- common/autotest_common.sh@10 -- # set +x 00:09:33.817 ************************************ 00:09:33.817 START TEST thread 00:09:33.817 ************************************ 00:09:33.817 19:27:52 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:09:33.817 * Looking for test storage... 
00:09:33.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:09:33.817 19:27:52 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:33.817 19:27:52 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:09:33.817 19:27:52 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:33.817 19:27:52 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:33.817 19:27:52 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:33.817 19:27:52 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:33.817 19:27:52 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:33.817 19:27:52 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:33.817 19:27:52 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:33.817 19:27:52 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:33.817 19:27:52 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:33.817 19:27:52 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:33.817 19:27:52 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:33.817 19:27:52 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:33.817 19:27:52 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:33.817 19:27:52 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:33.817 19:27:52 thread -- scripts/common.sh@345 -- # : 1 00:09:33.817 19:27:52 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:33.817 19:27:52 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:33.817 19:27:52 thread -- scripts/common.sh@365 -- # decimal 1 00:09:33.817 19:27:52 thread -- scripts/common.sh@353 -- # local d=1 00:09:33.817 19:27:52 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:33.817 19:27:52 thread -- scripts/common.sh@355 -- # echo 1 00:09:33.817 19:27:52 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:33.817 19:27:52 thread -- scripts/common.sh@366 -- # decimal 2 00:09:33.817 19:27:52 thread -- scripts/common.sh@353 -- # local d=2 00:09:33.817 19:27:52 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:33.817 19:27:52 thread -- scripts/common.sh@355 -- # echo 2 00:09:33.817 19:27:52 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:33.817 19:27:52 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:33.817 19:27:52 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:33.817 19:27:52 thread -- scripts/common.sh@368 -- # return 0 00:09:33.817 19:27:52 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:33.817 19:27:52 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:33.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.817 --rc genhtml_branch_coverage=1 00:09:33.817 --rc genhtml_function_coverage=1 00:09:33.817 --rc genhtml_legend=1 00:09:33.817 --rc geninfo_all_blocks=1 00:09:33.817 --rc geninfo_unexecuted_blocks=1 00:09:33.817 00:09:33.817 ' 00:09:33.817 19:27:52 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:33.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.817 --rc genhtml_branch_coverage=1 00:09:33.817 --rc genhtml_function_coverage=1 00:09:33.817 --rc genhtml_legend=1 00:09:33.817 --rc geninfo_all_blocks=1 00:09:33.817 --rc geninfo_unexecuted_blocks=1 00:09:33.817 00:09:33.817 ' 00:09:33.817 19:27:52 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:33.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:09:33.817 --rc genhtml_branch_coverage=1 00:09:33.817 --rc genhtml_function_coverage=1 00:09:33.817 --rc genhtml_legend=1 00:09:33.817 --rc geninfo_all_blocks=1 00:09:33.817 --rc geninfo_unexecuted_blocks=1 00:09:33.817 00:09:33.817 ' 00:09:33.817 19:27:52 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:33.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:33.817 --rc genhtml_branch_coverage=1 00:09:33.817 --rc genhtml_function_coverage=1 00:09:33.817 --rc genhtml_legend=1 00:09:33.817 --rc geninfo_all_blocks=1 00:09:33.817 --rc geninfo_unexecuted_blocks=1 00:09:33.817 00:09:33.817 ' 00:09:33.818 19:27:52 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:33.818 19:27:52 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:33.818 19:27:52 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.818 19:27:52 thread -- common/autotest_common.sh@10 -- # set +x 00:09:33.818 ************************************ 00:09:33.818 START TEST thread_poller_perf 00:09:33.818 ************************************ 00:09:33.818 19:27:52 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:33.818 [2024-12-05 19:27:52.795189] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:09:33.818 [2024-12-05 19:27:52.795387] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59776 ] 00:09:34.078 [2024-12-05 19:27:52.946837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.078 [2024-12-05 19:27:53.043275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.078 Running 1000 pollers for 1 seconds with 1 microseconds period. 
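poller_perf registers the requested number of pollers (-b 1000) with the period given by -l in microseconds and runs them for -t seconds; the table that follows reports total busy TSC cycles and the number of poller executions, from which the per-poll cost is derived:

    poller_cost (cyc)  = busy / total_run_count
    poller_cost (nsec) = poller_cost (cyc) * 1e9 / tsc_hz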
00:09:35.461 [2024-12-05T19:27:54.467Z] ====================================== 00:09:35.461 [2024-12-05T19:27:54.467Z] busy:2608164730 (cyc) 00:09:35.461 [2024-12-05T19:27:54.467Z] total_run_count: 302000 00:09:35.461 [2024-12-05T19:27:54.467Z] tsc_hz: 2600000000 (cyc) 00:09:35.461 [2024-12-05T19:27:54.467Z] ====================================== 00:09:35.461 [2024-12-05T19:27:54.467Z] poller_cost: 8636 (cyc), 3321 (nsec) 00:09:35.461 00:09:35.461 real 0m1.444s 00:09:35.461 user 0m1.266s 00:09:35.461 sys 0m0.070s 00:09:35.461 19:27:54 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:35.461 19:27:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:35.461 ************************************ 00:09:35.461 END TEST thread_poller_perf 00:09:35.461 ************************************ 00:09:35.461 19:27:54 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:35.461 19:27:54 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:09:35.461 19:27:54 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.461 19:27:54 thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.461 ************************************ 00:09:35.461 START TEST thread_poller_perf 00:09:35.461 ************************************ 00:09:35.461 19:27:54 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:35.461 [2024-12-05 19:27:54.282617] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:09:35.461 [2024-12-05 19:27:54.282836] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59812 ] 00:09:35.461 [2024-12-05 19:27:54.436543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.719 [2024-12-05 19:27:54.534107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.719 Running 1000 pollers for 1 seconds with 0 microseconds period. 
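Plugging the first run's numbers into that formula: 2608164730 / 302000 ≈ 8636 cycles per poll, and 8636 / 2.6 ≈ 3321 ns at the reported 2.6 GHz TSC — exactly the poller_cost line above. This second run sets the period to 0, i.e. the pollers run on every reactor iteration rather than on a 1 µs timer, so the table below shows roughly 12x as many executions at a small fraction of the per-poll cost.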
00:09:37.101 [2024-12-05T19:27:56.107Z] ====================================== 00:09:37.101 [2024-12-05T19:27:56.107Z] busy:2603652400 (cyc) 00:09:37.101 [2024-12-05T19:27:56.107Z] total_run_count: 3636000 00:09:37.101 [2024-12-05T19:27:56.107Z] tsc_hz: 2600000000 (cyc) 00:09:37.101 [2024-12-05T19:27:56.107Z] ====================================== 00:09:37.101 [2024-12-05T19:27:56.107Z] poller_cost: 716 (cyc), 275 (nsec) 00:09:37.101 00:09:37.101 real 0m1.438s 00:09:37.101 user 0m1.256s 00:09:37.101 sys 0m0.074s 00:09:37.102 19:27:55 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.102 ************************************ 00:09:37.102 END TEST thread_poller_perf 00:09:37.102 ************************************ 00:09:37.102 19:27:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:37.102 19:27:55 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:37.102 00:09:37.102 real 0m3.135s 00:09:37.102 user 0m2.650s 00:09:37.102 sys 0m0.268s 00:09:37.102 ************************************ 00:09:37.102 END TEST thread 00:09:37.102 ************************************ 00:09:37.102 19:27:55 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.102 19:27:55 thread -- common/autotest_common.sh@10 -- # set +x 00:09:37.102 19:27:55 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:37.102 19:27:55 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:37.102 19:27:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:37.102 19:27:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.102 19:27:55 -- common/autotest_common.sh@10 -- # set +x 00:09:37.102 ************************************ 00:09:37.102 START TEST app_cmdline 00:09:37.102 ************************************ 00:09:37.102 19:27:55 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:09:37.102 * Looking for test storage... 
00:09:37.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:37.102 19:27:55 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:37.102 19:27:55 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:09:37.102 19:27:55 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:37.102 19:27:55 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:37.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.102 19:27:55 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:37.102 19:27:55 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.102 19:27:55 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:37.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.102 --rc genhtml_branch_coverage=1 00:09:37.102 --rc genhtml_function_coverage=1 00:09:37.102 --rc genhtml_legend=1 00:09:37.102 --rc geninfo_all_blocks=1 00:09:37.102 --rc geninfo_unexecuted_blocks=1 00:09:37.102 00:09:37.102 ' 00:09:37.102 19:27:55 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:37.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.102 --rc genhtml_branch_coverage=1 00:09:37.102 --rc genhtml_function_coverage=1 00:09:37.102 --rc genhtml_legend=1 00:09:37.102 --rc geninfo_all_blocks=1 00:09:37.102 --rc geninfo_unexecuted_blocks=1 00:09:37.102 00:09:37.102 ' 00:09:37.102 19:27:55 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:37.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.102 --rc genhtml_branch_coverage=1 00:09:37.102 --rc genhtml_function_coverage=1 00:09:37.102 --rc genhtml_legend=1 00:09:37.102 --rc geninfo_all_blocks=1 00:09:37.102 --rc geninfo_unexecuted_blocks=1 00:09:37.102 00:09:37.102 ' 00:09:37.102 19:27:55 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:37.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.102 --rc genhtml_branch_coverage=1 00:09:37.102 --rc genhtml_function_coverage=1 00:09:37.102 --rc genhtml_legend=1 00:09:37.102 --rc geninfo_all_blocks=1 00:09:37.102 --rc geninfo_unexecuted_blocks=1 00:09:37.102 00:09:37.102 ' 00:09:37.102 19:27:55 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:37.102 19:27:55 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59896 00:09:37.102 19:27:55 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59896 00:09:37.102 19:27:55 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59896 ']' 00:09:37.102 19:27:55 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.102 19:27:55 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.102 19:27:55 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.102 19:27:55 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.102 19:27:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:37.102 19:27:55 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:37.102 [2024-12-05 19:27:55.968146] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
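cmdline.sh starts this target with --rpcs-allowed spdk_get_version,rpc_get_methods, so the RPC server will serve only those two methods and must reject everything else; the env_dpdk_get_mem_stats probe later in the test exists to confirm the rejection. Exercised by hand against the same default socket, the expectation would be:

    scripts/rpc.py spdk_get_version          # allowed: returns the version object shown below
    scripts/rpc.py rpc_get_methods           # allowed: lists exactly the two permitted methods
    scripts/rpc.py env_dpdk_get_mem_stats    # rejected: JSON-RPC -32601 "Method not found"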
00:09:37.102 [2024-12-05 19:27:55.968264] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59896 ] 00:09:37.362 [2024-12-05 19:27:56.126017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.362 [2024-12-05 19:27:56.233487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.933 19:27:56 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.933 19:27:56 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:09:37.933 19:27:56 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:09:38.195 { 00:09:38.195 "version": "SPDK v25.01-pre git sha1 3c8001115", 00:09:38.195 "fields": { 00:09:38.195 "major": 25, 00:09:38.195 "minor": 1, 00:09:38.195 "patch": 0, 00:09:38.195 "suffix": "-pre", 00:09:38.195 "commit": "3c8001115" 00:09:38.195 } 00:09:38.195 } 00:09:38.195 19:27:57 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:38.195 19:27:57 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:38.195 19:27:57 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:38.195 19:27:57 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:38.195 19:27:57 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:38.195 19:27:57 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:38.195 19:27:57 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.195 19:27:57 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:38.195 19:27:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:38.195 19:27:57 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.195 19:27:57 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:38.195 19:27:57 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:38.195 19:27:57 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:38.195 19:27:57 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:09:38.195 19:27:57 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:38.195 19:27:57 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:38.195 19:27:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:38.195 19:27:57 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:38.195 19:27:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:38.195 19:27:57 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:38.195 19:27:57 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:38.195 19:27:57 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:38.195 19:27:57 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:38.195 19:27:57 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:38.456 request: 00:09:38.456 { 00:09:38.456 "method": "env_dpdk_get_mem_stats", 00:09:38.456 "req_id": 1 00:09:38.456 } 00:09:38.456 Got JSON-RPC error response 00:09:38.456 response: 00:09:38.456 { 00:09:38.456 "code": -32601, 00:09:38.456 "message": "Method not found" 00:09:38.456 } 00:09:38.456 19:27:57 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:09:38.456 19:27:57 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:38.456 19:27:57 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:38.456 19:27:57 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:38.456 19:27:57 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59896 00:09:38.456 19:27:57 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59896 ']' 00:09:38.456 19:27:57 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59896 00:09:38.456 19:27:57 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:09:38.456 19:27:57 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:38.456 19:27:57 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59896 00:09:38.456 killing process with pid 59896 00:09:38.456 19:27:57 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:38.456 19:27:57 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:38.456 19:27:57 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59896' 00:09:38.456 19:27:57 app_cmdline -- common/autotest_common.sh@973 -- # kill 59896 00:09:38.456 19:27:57 app_cmdline -- common/autotest_common.sh@978 -- # wait 59896 00:09:40.374 00:09:40.374 real 0m3.181s 00:09:40.374 user 0m3.485s 00:09:40.374 sys 0m0.444s 00:09:40.374 19:27:58 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.374 ************************************ 00:09:40.374 END TEST app_cmdline 00:09:40.374 ************************************ 00:09:40.374 19:27:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:40.374 19:27:58 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:40.374 19:27:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:40.374 19:27:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.374 19:27:58 -- common/autotest_common.sh@10 -- # set +x 00:09:40.374 ************************************ 00:09:40.374 START TEST version 00:09:40.374 ************************************ 00:09:40.374 19:27:58 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:09:40.374 * Looking for test storage... 
00:09:40.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:09:40.374 19:27:59 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:40.374 19:27:59 version -- common/autotest_common.sh@1711 -- # lcov --version 00:09:40.374 19:27:59 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:40.374 19:27:59 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:40.374 19:27:59 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.374 19:27:59 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.374 19:27:59 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.374 19:27:59 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.374 19:27:59 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.374 19:27:59 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.374 19:27:59 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.374 19:27:59 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.374 19:27:59 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.374 19:27:59 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.374 19:27:59 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.374 19:27:59 version -- scripts/common.sh@344 -- # case "$op" in 00:09:40.374 19:27:59 version -- scripts/common.sh@345 -- # : 1 00:09:40.374 19:27:59 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.374 19:27:59 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:40.374 19:27:59 version -- scripts/common.sh@365 -- # decimal 1 00:09:40.374 19:27:59 version -- scripts/common.sh@353 -- # local d=1 00:09:40.374 19:27:59 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.374 19:27:59 version -- scripts/common.sh@355 -- # echo 1 00:09:40.374 19:27:59 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.374 19:27:59 version -- scripts/common.sh@366 -- # decimal 2 00:09:40.374 19:27:59 version -- scripts/common.sh@353 -- # local d=2 00:09:40.374 19:27:59 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.374 19:27:59 version -- scripts/common.sh@355 -- # echo 2 00:09:40.374 19:27:59 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.374 19:27:59 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.374 19:27:59 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.374 19:27:59 version -- scripts/common.sh@368 -- # return 0 00:09:40.374 19:27:59 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.374 19:27:59 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:40.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.374 --rc genhtml_branch_coverage=1 00:09:40.374 --rc genhtml_function_coverage=1 00:09:40.374 --rc genhtml_legend=1 00:09:40.374 --rc geninfo_all_blocks=1 00:09:40.374 --rc geninfo_unexecuted_blocks=1 00:09:40.374 00:09:40.374 ' 00:09:40.374 19:27:59 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:40.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.374 --rc genhtml_branch_coverage=1 00:09:40.374 --rc genhtml_function_coverage=1 00:09:40.374 --rc genhtml_legend=1 00:09:40.374 --rc geninfo_all_blocks=1 00:09:40.374 --rc geninfo_unexecuted_blocks=1 00:09:40.374 00:09:40.374 ' 00:09:40.374 19:27:59 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:40.374 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:40.374 --rc genhtml_branch_coverage=1 00:09:40.374 --rc genhtml_function_coverage=1 00:09:40.374 --rc genhtml_legend=1 00:09:40.374 --rc geninfo_all_blocks=1 00:09:40.374 --rc geninfo_unexecuted_blocks=1 00:09:40.374 00:09:40.374 ' 00:09:40.374 19:27:59 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:40.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.374 --rc genhtml_branch_coverage=1 00:09:40.374 --rc genhtml_function_coverage=1 00:09:40.374 --rc genhtml_legend=1 00:09:40.374 --rc geninfo_all_blocks=1 00:09:40.374 --rc geninfo_unexecuted_blocks=1 00:09:40.374 00:09:40.374 ' 00:09:40.374 19:27:59 version -- app/version.sh@17 -- # get_header_version major 00:09:40.374 19:27:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:40.374 19:27:59 version -- app/version.sh@14 -- # tr -d '"' 00:09:40.374 19:27:59 version -- app/version.sh@14 -- # cut -f2 00:09:40.374 19:27:59 version -- app/version.sh@17 -- # major=25 00:09:40.374 19:27:59 version -- app/version.sh@18 -- # get_header_version minor 00:09:40.374 19:27:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:40.374 19:27:59 version -- app/version.sh@14 -- # cut -f2 00:09:40.374 19:27:59 version -- app/version.sh@14 -- # tr -d '"' 00:09:40.374 19:27:59 version -- app/version.sh@18 -- # minor=1 00:09:40.374 19:27:59 version -- app/version.sh@19 -- # get_header_version patch 00:09:40.374 19:27:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:40.374 19:27:59 version -- app/version.sh@14 -- # cut -f2 00:09:40.374 19:27:59 version -- app/version.sh@14 -- # tr -d '"' 00:09:40.374 19:27:59 version -- app/version.sh@19 -- # patch=0 00:09:40.374 19:27:59 version -- app/version.sh@20 -- # get_header_version suffix 00:09:40.374 19:27:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:09:40.374 19:27:59 version -- app/version.sh@14 -- # cut -f2 00:09:40.374 19:27:59 version -- app/version.sh@14 -- # tr -d '"' 00:09:40.374 19:27:59 version -- app/version.sh@20 -- # suffix=-pre 00:09:40.374 19:27:59 version -- app/version.sh@22 -- # version=25.1 00:09:40.374 19:27:59 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:40.374 19:27:59 version -- app/version.sh@28 -- # version=25.1rc0 00:09:40.374 19:27:59 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:09:40.374 19:27:59 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:40.374 19:27:59 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:40.374 19:27:59 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:40.374 ************************************ 00:09:40.374 END TEST version 00:09:40.374 ************************************ 00:09:40.374 00:09:40.374 real 0m0.193s 00:09:40.374 user 0m0.117s 00:09:40.374 sys 0m0.100s 00:09:40.375 19:27:59 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.375 19:27:59 version -- common/autotest_common.sh@10 -- # set +x 00:09:40.375 19:27:59 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:40.375 19:27:59 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:09:40.375 19:27:59 -- spdk/autotest.sh@194 -- # uname -s 00:09:40.375 19:27:59 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:09:40.375 19:27:59 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:40.375 19:27:59 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:40.375 19:27:59 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:09:40.375 19:27:59 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:40.375 19:27:59 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:40.375 19:27:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.375 19:27:59 -- common/autotest_common.sh@10 -- # set +x 00:09:40.375 ************************************ 00:09:40.375 START TEST blockdev_nvme 00:09:40.375 ************************************ 00:09:40.375 19:27:59 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:09:40.375 * Looking for test storage... 00:09:40.375 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:09:40.375 19:27:59 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:40.375 19:27:59 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:09:40.375 19:27:59 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:40.375 19:27:59 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:40.375 19:27:59 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:09:40.375 19:27:59 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:40.375 19:27:59 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:40.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.375 --rc genhtml_branch_coverage=1 00:09:40.375 --rc genhtml_function_coverage=1 00:09:40.375 --rc genhtml_legend=1 00:09:40.375 --rc geninfo_all_blocks=1 00:09:40.375 --rc geninfo_unexecuted_blocks=1 00:09:40.375 00:09:40.375 ' 00:09:40.375 19:27:59 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:40.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.375 --rc genhtml_branch_coverage=1 00:09:40.375 --rc genhtml_function_coverage=1 00:09:40.375 --rc genhtml_legend=1 00:09:40.375 --rc geninfo_all_blocks=1 00:09:40.375 --rc geninfo_unexecuted_blocks=1 00:09:40.375 00:09:40.375 ' 00:09:40.375 19:27:59 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:40.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.375 --rc genhtml_branch_coverage=1 00:09:40.375 --rc genhtml_function_coverage=1 00:09:40.375 --rc genhtml_legend=1 00:09:40.375 --rc geninfo_all_blocks=1 00:09:40.375 --rc geninfo_unexecuted_blocks=1 00:09:40.375 00:09:40.375 ' 00:09:40.375 19:27:59 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:40.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:40.375 --rc genhtml_branch_coverage=1 00:09:40.375 --rc genhtml_function_coverage=1 00:09:40.375 --rc genhtml_legend=1 00:09:40.375 --rc geninfo_all_blocks=1 00:09:40.375 --rc geninfo_unexecuted_blocks=1 00:09:40.375 00:09:40.375 ' 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:09:40.375 19:27:59 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60074 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:40.375 19:27:59 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60074 00:09:40.375 19:27:59 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 60074 ']' 00:09:40.375 19:27:59 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.375 19:27:59 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.375 19:27:59 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.375 19:27:59 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.375 19:27:59 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:40.636 [2024-12-05 19:27:59.419249] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
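blockdev.sh is about to feed this target a config generated by scripts/gen_nvme.sh: a bdev subsystem section whose entries each call bdev_nvme_attach_controller for one of the four emulated PCIe controllers (traddr 0000:00:10.0 through 0000:00:13.0), visible in full in the load_subsystem_config xtrace below. The equivalent one-controller RPC, sketched assuming the stock rpc.py flag spelling (-b name, -t trtype, -a traddr):

    # attach the first controller from the generated config by hand
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0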
00:09:40.636 [2024-12-05 19:27:59.419800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60074 ] 00:09:40.636 [2024-12-05 19:27:59.573189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.896 [2024-12-05 19:27:59.673967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.466 19:28:00 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:41.466 19:28:00 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:09:41.466 19:28:00 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:09:41.466 19:28:00 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:09:41.466 19:28:00 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:09:41.466 19:28:00 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:09:41.466 19:28:00 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:41.466 19:28:00 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:09:41.466 19:28:00 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.466 19:28:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:41.730 19:28:00 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.730 19:28:00 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:09:41.730 19:28:00 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.730 19:28:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:41.730 19:28:00 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.730 19:28:00 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:09:41.730 19:28:00 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:09:41.730 19:28:00 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.730 19:28:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:41.730 19:28:00 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.730 19:28:00 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:09:41.730 19:28:00 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.730 19:28:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:41.730 19:28:00 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.730 19:28:00 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:09:41.730 19:28:00 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.730 19:28:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:41.730 19:28:00 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.730 19:28:00 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:09:41.730 19:28:00 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:09:41.730 19:28:00 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.730 19:28:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:41.730 19:28:00 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:09:41.730 19:28:00 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.730 19:28:00 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:09:41.730 19:28:00 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:09:41.730 19:28:00 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "635585f6-3672-4245-9747-abdaecf16cdf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "635585f6-3672-4245-9747-abdaecf16cdf",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "6b48d809-c766-406f-92c6-02bb68c03258"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6b48d809-c766-406f-92c6-02bb68c03258",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "64ed5dfc-bcbd-45e8-933c-d953ef935148"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "64ed5dfc-bcbd-45e8-933c-d953ef935148",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "807ddd72-41aa-492b-af97-3558e133f040"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "807ddd72-41aa-492b-af97-3558e133f040",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "4746fd5e-0146-49ca-8b83-5d28ac270d6f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "4746fd5e-0146-49ca-8b83-5d28ac270d6f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "1afebd01-2b73-4337-b566-3df10640b742"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1afebd01-2b73-4337-b566-3df10640b742",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:09:41.992 19:28:00 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:09:41.992 19:28:00 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:09:41.992 19:28:00 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:09:41.992 19:28:00 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 60074 00:09:41.992 19:28:00 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 60074 ']' 00:09:41.992 19:28:00 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 60074 00:09:41.992 19:28:00 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:09:41.992 19:28:00 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:41.992 19:28:00 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60074 00:09:41.992 killing process with pid 60074 00:09:41.992 19:28:00 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:41.992 19:28:00 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:41.992 19:28:00 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60074' 00:09:41.992 19:28:00 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 60074 00:09:41.992 19:28:00 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 60074 00:09:43.378 19:28:02 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:43.378 19:28:02 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:43.378 19:28:02 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:09:43.378 19:28:02 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.378 19:28:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:43.378 ************************************ 00:09:43.378 START TEST bdev_hello_world 00:09:43.378 ************************************ 00:09:43.378 19:28:02 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:43.378 [2024-12-05 19:28:02.319106] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:09:43.378 [2024-12-05 19:28:02.319259] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60152 ] 00:09:43.638 [2024-12-05 19:28:02.472692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.638 [2024-12-05 19:28:02.574421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.212 [2024-12-05 19:28:03.110477] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:44.212 [2024-12-05 19:28:03.110528] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:44.212 [2024-12-05 19:28:03.110553] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:44.212 [2024-12-05 19:28:03.113208] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:44.212 [2024-12-05 19:28:03.113691] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:44.212 [2024-12-05 19:28:03.113721] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:44.212 [2024-12-05 19:28:03.113860] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:09:44.212 00:09:44.212 [2024-12-05 19:28:03.113893] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:45.154 00:09:45.154 real 0m1.582s 00:09:45.154 user 0m1.301s 00:09:45.154 sys 0m0.173s 00:09:45.154 19:28:03 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.154 19:28:03 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:45.154 ************************************ 00:09:45.154 END TEST bdev_hello_world 00:09:45.154 ************************************ 00:09:45.154 19:28:03 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:09:45.154 19:28:03 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:45.154 19:28:03 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.154 19:28:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:45.154 ************************************ 00:09:45.154 START TEST bdev_bounds 00:09:45.154 ************************************ 00:09:45.154 Process bdevio pid: 60194 00:09:45.154 19:28:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:09:45.154 19:28:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60194 00:09:45.154 19:28:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:45.154 19:28:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60194' 00:09:45.154 19:28:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60194 00:09:45.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.154 19:28:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 60194 ']' 00:09:45.154 19:28:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.154 19:28:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.155 19:28:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:45.155 19:28:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.155 19:28:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.155 19:28:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:45.155 [2024-12-05 19:28:03.939769] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
00:09:45.155 [2024-12-05 19:28:03.940045] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60194 ] 00:09:45.155 [2024-12-05 19:28:04.098538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:45.414 [2024-12-05 19:28:04.201438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.414 [2024-12-05 19:28:04.201864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:45.414 [2024-12-05 19:28:04.201968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.003 19:28:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.003 19:28:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:09:46.003 19:28:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:46.003 I/O targets: 00:09:46.003 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:46.003 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:46.003 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:46.003 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:46.003 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:46.003 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:46.003 00:09:46.003 00:09:46.003 CUnit - A unit testing framework for C - Version 2.1-3 00:09:46.003 http://cunit.sourceforge.net/ 00:09:46.003 00:09:46.003 00:09:46.003 Suite: bdevio tests on: Nvme3n1 00:09:46.003 Test: blockdev write read block ...passed 00:09:46.003 Test: blockdev write zeroes read block ...passed 00:09:46.003 Test: blockdev write zeroes read no split ...passed 00:09:46.003 Test: blockdev write zeroes read split ...passed 00:09:46.003 Test: blockdev write zeroes read split partial ...passed 00:09:46.003 Test: blockdev reset ...[2024-12-05 19:28:04.999457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:09:46.003 passed 00:09:46.003 Test: blockdev write read 8 blocks ...[2024-12-05 19:28:05.003008] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:09:46.003 passed 00:09:46.003 Test: blockdev write read size > 128k ...passed 00:09:46.004 Test: blockdev write read invalid size ...passed 00:09:46.004 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:46.004 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:46.004 Test: blockdev write read max offset ...passed 00:09:46.004 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:46.004 Test: blockdev writev readv 8 blocks ...passed 00:09:46.004 Test: blockdev writev readv 30 x 1block ...passed 00:09:46.004 Test: blockdev writev readv block ...passed 00:09:46.004 Test: blockdev writev readv size > 128k ...passed 00:09:46.004 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:46.282 Test: blockdev comparev and writev ...[2024-12-05 19:28:05.008841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:09:46.282 Test: blockdev nvme passthru rw ...passed 00:09:46.282 Test: blockdev nvme passthru vendor specific ...passed 00:09:46.282 Test: blockdev nvme admin passthru ...SGL DATA BLOCK ADDRESS 0x2ae60a000 len:0x1000 00:09:46.283 [2024-12-05 19:28:05.008977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:46.283 [2024-12-05 19:28:05.009472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:46.283 [2024-12-05 19:28:05.009502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:46.283 passed 00:09:46.283 Test: blockdev copy ...passed 00:09:46.283 Suite: bdevio tests on: Nvme2n3 00:09:46.283 Test: blockdev write read block ...passed 00:09:46.283 Test: blockdev write zeroes read block ...passed 00:09:46.283 Test: blockdev write zeroes read no split ...passed 00:09:46.283 Test: blockdev write zeroes read split ...passed 00:09:46.283 Test: blockdev write zeroes read split partial ...passed 00:09:46.283 Test: blockdev reset ...[2024-12-05 19:28:05.062603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:46.283 [2024-12-05 19:28:05.065717] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:09:46.283 passed 00:09:46.283 Test: blockdev write read 8 blocks ...passed 00:09:46.283 Test: blockdev write read size > 128k ...passed 00:09:46.283 Test: blockdev write read invalid size ...passed 00:09:46.283 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:46.283 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:46.283 Test: blockdev write read max offset ...passed 00:09:46.283 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:46.283 Test: blockdev writev readv 8 blocks ...passed 00:09:46.283 Test: blockdev writev readv 30 x 1block ...passed 00:09:46.283 Test: blockdev writev readv block ...passed 00:09:46.283 Test: blockdev writev readv size > 128k ...passed 00:09:46.283 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:46.283 Test: blockdev comparev and writev ...[2024-12-05 19:28:05.071954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 passed 00:09:46.283 Test: blockdev nvme passthru rw ...passed 00:09:46.284 Test: blockdev nvme passthru vendor specific ...passed 00:09:46.284 Test: blockdev nvme admin passthru ...SGL DATA BLOCK ADDRESS 0x2b2a06000 len:0x1000 00:09:46.284 [2024-12-05 19:28:05.072074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:46.284 [2024-12-05 19:28:05.072543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:46.284 [2024-12-05 19:28:05.072570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:46.284 passed 00:09:46.284 Test: blockdev copy ...passed 00:09:46.284 Suite: bdevio tests on: Nvme2n2 00:09:46.284 Test: blockdev write read block ...passed 00:09:46.284 Test: blockdev write zeroes read block ...passed 00:09:46.284 Test: blockdev write zeroes read no split ...passed 00:09:46.284 Test: blockdev write zeroes read split ...passed 00:09:46.284 Test: blockdev write zeroes read split partial ...passed 00:09:46.284 Test: blockdev reset ...[2024-12-05 19:28:05.113455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:46.284 [2024-12-05 19:28:05.116431] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:09:46.284 Test: blockdev write read 8 blocks ...passed 00:09:46.284 Test: blockdev write read size > 128k ...uccessful. 
00:09:46.284 passed 00:09:46.284 Test: blockdev write read invalid size ...passed 00:09:46.284 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:46.284 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:46.284 Test: blockdev write read max offset ...passed 00:09:46.284 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:46.284 Test: blockdev writev readv 8 blocks ...passed 00:09:46.284 Test: blockdev writev readv 30 x 1block ...passed 00:09:46.284 Test: blockdev writev readv block ...passed 00:09:46.284 Test: blockdev writev readv size > 128k ...passed 00:09:46.284 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:46.284 Test: blockdev comparev and writev ...[2024-12-05 19:28:05.122599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 passed 00:09:46.284 Test: blockdev nvme passthru rw ...passed 00:09:46.285 Test: blockdev nvme passthru vendor specific ...SGL DATA BLOCK ADDRESS 0x2bf83c000 len:0x1000 00:09:46.285 [2024-12-05 19:28:05.122724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:46.285 passed 00:09:46.285 Test: blockdev nvme admin passthru ...[2024-12-05 19:28:05.123347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:46.285 [2024-12-05 19:28:05.123376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:46.285 passed 00:09:46.285 Test: blockdev copy ...passed 00:09:46.285 Suite: bdevio tests on: Nvme2n1 00:09:46.285 Test: blockdev write read block ...passed 00:09:46.285 Test: blockdev write zeroes read block ...passed 00:09:46.285 Test: blockdev write zeroes read no split ...passed 00:09:46.285 Test: blockdev write zeroes read split ...passed 00:09:46.285 Test: blockdev write zeroes read split partial ...passed 00:09:46.285 Test: blockdev reset ...[2024-12-05 19:28:05.164736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:46.285 [2024-12-05 19:28:05.167748] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller spassed 00:09:46.285 Test: blockdev write read 8 blocks ...passed 00:09:46.285 Test: blockdev write read size > 128k ...uccessful. 
00:09:46.285 passed 00:09:46.288 Test: blockdev write read invalid size ...passed 00:09:46.288 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:46.288 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:46.288 Test: blockdev write read max offset ...passed 00:09:46.288 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:46.288 Test: blockdev writev readv 8 blocks ...passed 00:09:46.288 Test: blockdev writev readv 30 x 1block ...passed 00:09:46.288 Test: blockdev writev readv block ...passed 00:09:46.288 Test: blockdev writev readv size > 128k ...passed 00:09:46.288 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:46.288 Test: blockdev comparev and writev ...[2024-12-05 19:28:05.175795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf838000 len:0x1000 00:09:46.288 [2024-12-05 19:28:05.175842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:46.288 passed 00:09:46.288 Test: blockdev nvme passthru rw ...passed 00:09:46.288 Test: blockdev nvme passthru vendor specific ...passed 00:09:46.288 Test: blockdev nvme admin passthru ...[2024-12-05 19:28:05.176785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:46.288 [2024-12-05 19:28:05.176910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:46.288 passed 00:09:46.288 Test: blockdev copy ...passed 00:09:46.288 Suite: bdevio tests on: Nvme1n1 00:09:46.288 Test: blockdev write read block ...passed 00:09:46.288 Test: blockdev write zeroes read block ...passed 00:09:46.288 Test: blockdev write zeroes read no split ...passed 00:09:46.288 Test: blockdev write zeroes read split ...passed 00:09:46.288 Test: blockdev write zeroes read split partial ...passed 00:09:46.288 Test: blockdev reset ...[2024-12-05 19:28:05.230590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:09:46.289 [2024-12-05 19:28:05.233116] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller spasseduccessful. 
00:09:46.289 00:09:46.289 Test: blockdev write read 8 blocks ...passed 00:09:46.289 Test: blockdev write read size > 128k ...passed 00:09:46.289 Test: blockdev write read invalid size ...passed 00:09:46.289 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:46.289 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:46.289 Test: blockdev write read max offset ...passed 00:09:46.289 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:46.289 Test: blockdev writev readv 8 blocks ...passed 00:09:46.289 Test: blockdev writev readv 30 x 1block ...passed 00:09:46.289 Test: blockdev writev readv block ...passed 00:09:46.289 Test: blockdev writev readv size > 128k ...passed 00:09:46.289 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:46.289 Test: blockdev comparev and writev ...[2024-12-05 19:28:05.238843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bf834000 len:0x1000 00:09:46.289 [2024-12-05 19:28:05.238888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:46.289 passed 00:09:46.289 Test: blockdev nvme passthru rw ...passed 00:09:46.289 Test: blockdev nvme passthru vendor specific ...[2024-12-05 19:28:05.239352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:46.289 [2024-12-05 19:28:05.239376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:46.289 passed 00:09:46.289 Test: blockdev nvme admin passthru ...passed 00:09:46.289 Test: blockdev copy ...passed 00:09:46.289 Suite: bdevio tests on: Nvme0n1 00:09:46.289 Test: blockdev write read block ...passed 00:09:46.289 Test: blockdev write zeroes read block ...passed 00:09:46.289 Test: blockdev write zeroes read no split ...passed 00:09:46.289 Test: blockdev write zeroes read split ...passed 00:09:46.549 Test: blockdev write zeroes read split partial ...passed 00:09:46.549 Test: blockdev reset ...[2024-12-05 19:28:05.283965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:46.549 [2024-12-05 19:28:05.286621] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller spasseduccessful. 
00:09:46.549 00:09:46.549 Test: blockdev write read 8 blocks ...passed 00:09:46.549 Test: blockdev write read size > 128k ...passed 00:09:46.549 Test: blockdev write read invalid size ...passed 00:09:46.549 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:46.549 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:46.549 Test: blockdev write read max offset ...passed 00:09:46.549 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:46.549 Test: blockdev writev readv 8 blocks ...passed 00:09:46.549 Test: blockdev writev readv 30 x 1block ...passed 00:09:46.549 Test: blockdev writev readv block ...passed 00:09:46.549 Test: blockdev writev readv size > 128k ...passed 00:09:46.549 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:46.549 Test: blockdev comparev and writev ...passed 00:09:46.549 Test: blockdev nvme passthru rw ...[2024-12-05 19:28:05.292072] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:46.549 separate metadata which is not supported yet. 00:09:46.549 passed 00:09:46.549 Test: blockdev nvme passthru vendor specific ...[2024-12-05 19:28:05.292497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:46.549 [2024-12-05 19:28:05.292754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0passed 00:09:46.549 Test: blockdev nvme admin passthru ... sqhd:0017 p:1 m:0 dnr:1 00:09:46.549 passed 00:09:46.549 Test: blockdev copy ...passed 00:09:46.549 00:09:46.549 Run Summary: Type Total Ran Passed Failed Inactive 00:09:46.549 suites 6 6 n/a 0 0 00:09:46.549 tests 138 138 138 0 0 00:09:46.549 asserts 893 893 893 0 n/a 00:09:46.549 00:09:46.549 Elapsed time = 0.916 seconds 00:09:46.549 0 00:09:46.549 19:28:05 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60194 00:09:46.549 19:28:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 60194 ']' 00:09:46.549 19:28:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 60194 00:09:46.549 19:28:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:09:46.549 19:28:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:46.549 19:28:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60194 00:09:46.549 killing process with pid 60194 00:09:46.549 19:28:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:46.549 19:28:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:46.549 19:28:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60194' 00:09:46.549 19:28:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 60194 00:09:46.549 19:28:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 60194 00:09:47.120 19:28:06 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:09:47.120 00:09:47.120 real 0m2.128s 00:09:47.120 user 0m5.578s 00:09:47.120 sys 0m0.280s 00:09:47.120 19:28:06 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.120 ************************************ 00:09:47.120 END TEST bdev_bounds 00:09:47.120 ************************************ 00:09:47.120 19:28:06 
blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:47.120 19:28:06 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:47.120 19:28:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:47.120 19:28:06 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.120 19:28:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:47.120 ************************************ 00:09:47.120 START TEST bdev_nbd 00:09:47.120 ************************************ 00:09:47.120 19:28:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:47.120 19:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:09:47.120 19:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:09:47.120 19:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.120 19:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:47.120 19:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:47.120 19:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:09:47.120 19:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:09:47.120 19:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:09:47.121 19:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:47.121 19:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:09:47.121 19:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:09:47.121 19:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:47.121 19:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:09:47.121 19:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:47.121 19:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:09:47.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:09:47.121 19:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60248 00:09:47.121 19:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:47.121 19:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60248 /var/tmp/spdk-nbd.sock 00:09:47.121 19:28:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 60248 ']' 00:09:47.121 19:28:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:47.121 19:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:47.121 19:28:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.121 19:28:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:47.121 19:28:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.121 19:28:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:47.121 [2024-12-05 19:28:06.108369] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:09:47.121 [2024-12-05 19:28:06.108487] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.380 [2024-12-05 19:28:06.262795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.380 [2024-12-05 19:28:06.363453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.950 19:28:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.950 19:28:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:09:47.950 19:28:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:47.950 19:28:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.950 19:28:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:47.950 19:28:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:47.950 19:28:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:47.950 19:28:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:47.950 19:28:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:47.950 19:28:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:47.950 19:28:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:47.950 19:28:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:47.950 19:28:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:47.951 19:28:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:47.951 19:28:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:48.212 19:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:48.212 19:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:48.212 19:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:48.212 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:48.212 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:48.212 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:48.212 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:48.212 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:48.212 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:48.212 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:48.212 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:48.212 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:48.212 1+0 records in 00:09:48.212 1+0 records out 00:09:48.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000481598 s, 8.5 MB/s 00:09:48.212 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.212 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:48.212 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.212 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:48.212 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:48.212 19:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:48.212 19:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:48.212 19:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:48.473 19:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:48.473 19:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:48.473 19:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:48.473 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:48.473 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:48.473 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:48.473 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:48.473 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:48.473 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:48.473 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:48.473 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:48.473 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:48.473 1+0 records in 00:09:48.473 1+0 records out 00:09:48.473 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00037336 s, 11.0 MB/s 00:09:48.473 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.473 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:48.473 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.473 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:48.473 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:48.473 19:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:48.473 19:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:48.473 19:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:48.735 19:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:48.735 19:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:48.735 19:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:48.735 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:09:48.735 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:48.735 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:48.735 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:48.735 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:09:48.735 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:48.735 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:48.735 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:48.735 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:48.735 1+0 records in 00:09:48.735 1+0 records out 00:09:48.735 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039638 s, 10.3 MB/s 00:09:48.735 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.735 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:48.735 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.735 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:48.735 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:48.735 19:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:48.735 19:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:48.735 19:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:48.995 19:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:48.995 19:28:07 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:48.995 19:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:48.995 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:09:48.995 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:48.995 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:48.995 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:48.995 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:09:48.995 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:48.995 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:48.995 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:48.995 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:48.995 1+0 records in 00:09:48.995 1+0 records out 00:09:48.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00330218 s, 1.2 MB/s 00:09:48.995 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.995 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:48.995 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:48.995 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:48.995 19:28:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:48.995 19:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:48.995 19:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:48.995 19:28:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:49.255 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:49.255 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:49.255 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:49.255 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:09:49.255 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:49.255 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:49.255 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:49.255 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:09:49.255 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:49.255 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:49.255 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:49.255 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:49.255 1+0 records in 00:09:49.255 1+0 records out 00:09:49.255 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411763 s, 9.9 MB/s 00:09:49.255 19:28:08 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.255 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:49.255 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.255 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:49.255 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:49.255 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:49.255 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:49.255 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:49.541 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:49.541 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:49.541 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:49.541 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:09:49.541 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:49.541 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:49.541 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:49.541 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:09:49.541 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:49.541 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:49.541 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:49.541 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:49.541 1+0 records in 00:09:49.541 1+0 records out 00:09:49.541 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384068 s, 10.7 MB/s 00:09:49.541 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.541 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:49.541 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:49.541 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:49.541 19:28:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:49.541 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:49.541 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:09:49.541 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:49.802 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:49.802 { 00:09:49.802 "nbd_device": "/dev/nbd0", 00:09:49.802 "bdev_name": "Nvme0n1" 00:09:49.802 }, 00:09:49.802 { 00:09:49.802 "nbd_device": "/dev/nbd1", 00:09:49.802 "bdev_name": "Nvme1n1" 00:09:49.802 }, 00:09:49.802 { 00:09:49.802 "nbd_device": "/dev/nbd2", 00:09:49.802 "bdev_name": "Nvme2n1" 00:09:49.802 }, 
00:09:49.802 { 00:09:49.802 "nbd_device": "/dev/nbd3", 00:09:49.802 "bdev_name": "Nvme2n2" 00:09:49.802 }, 00:09:49.802 { 00:09:49.802 "nbd_device": "/dev/nbd4", 00:09:49.802 "bdev_name": "Nvme2n3" 00:09:49.802 }, 00:09:49.802 { 00:09:49.802 "nbd_device": "/dev/nbd5", 00:09:49.802 "bdev_name": "Nvme3n1" 00:09:49.802 } 00:09:49.802 ]' 00:09:49.802 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:49.802 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:49.802 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:49.802 { 00:09:49.802 "nbd_device": "/dev/nbd0", 00:09:49.802 "bdev_name": "Nvme0n1" 00:09:49.802 }, 00:09:49.802 { 00:09:49.802 "nbd_device": "/dev/nbd1", 00:09:49.802 "bdev_name": "Nvme1n1" 00:09:49.802 }, 00:09:49.802 { 00:09:49.802 "nbd_device": "/dev/nbd2", 00:09:49.802 "bdev_name": "Nvme2n1" 00:09:49.802 }, 00:09:49.802 { 00:09:49.802 "nbd_device": "/dev/nbd3", 00:09:49.802 "bdev_name": "Nvme2n2" 00:09:49.802 }, 00:09:49.802 { 00:09:49.802 "nbd_device": "/dev/nbd4", 00:09:49.802 "bdev_name": "Nvme2n3" 00:09:49.802 }, 00:09:49.802 { 00:09:49.802 "nbd_device": "/dev/nbd5", 00:09:49.802 "bdev_name": "Nvme3n1" 00:09:49.802 } 00:09:49.802 ]' 00:09:49.802 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:09:49.802 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:49.802 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:09:49.802 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:49.802 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:49.802 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:49.802 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:49.802 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:49.802 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:49.802 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:49.802 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:49.802 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:49.802 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:49.802 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:49.802 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:49.802 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:49.802 19:28:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:50.064 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:50.064 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:50.064 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:50.064 19:28:09 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:50.064 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:50.064 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:50.064 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:50.064 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:50.064 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:50.064 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:50.417 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:50.417 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:50.417 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:50.417 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:50.417 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:50.417 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:50.417 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:50.417 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:50.417 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:50.417 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:50.707 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:50.707 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:50.707 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:50.707 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:50.707 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:50.707 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:50.707 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:50.707 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:50.707 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:50.707 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:50.707 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:50.707 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:50.707 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:50.707 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:50.707 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:50.707 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:50.707 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:50.707 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:50.707 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:50.707 19:28:09 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:50.969 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:50.969 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:50.969 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:50.969 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:50.969 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:50.969 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:50.969 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:50.969 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:50.969 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:50.969 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.969 19:28:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:51.231 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:51.493 /dev/nbd0 00:09:51.493 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:51.493 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:51.493 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:51.493 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:51.493 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:51.493 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:51.493 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:51.493 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:51.493 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:51.493 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:51.493 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:51.493 1+0 records in 00:09:51.493 1+0 records out 00:09:51.493 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527721 s, 7.8 MB/s 00:09:51.493 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:51.493 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:51.493 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:51.493 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:51.493 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:51.493 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:51.493 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:51.493 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:09:51.755 /dev/nbd1 00:09:51.755 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:51.755 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:51.755 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:51.755 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:51.755 19:28:10 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:51.755 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:51.755 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:51.755 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:51.755 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:51.755 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:51.755 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:51.755 1+0 records in 00:09:51.755 1+0 records out 00:09:51.755 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038586 s, 10.6 MB/s 00:09:51.755 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:51.755 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:51.755 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:51.755 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:51.755 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:51.755 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:51.755 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:51.755 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:09:52.015 /dev/nbd10 00:09:52.015 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:52.015 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:52.015 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:09:52.015 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:52.015 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:52.015 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:52.015 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:09:52.015 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:52.015 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:52.015 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:52.015 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:52.015 1+0 records in 00:09:52.015 1+0 records out 00:09:52.015 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341543 s, 12.0 MB/s 00:09:52.015 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.015 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:52.015 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.015 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
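The per-device export sequence repeated through this stretch of the trace follows a fixed pattern: nbd_start_disk over the RPC socket, a bounded poll of /proc/partitions until the kernel registers the device, then a one-block O_DIRECT read as a smoke test. A condensed standalone sketch of that pattern in bash (the 0.1 s sleep interval is an assumption; the trace only shows the 20-iteration bound):

    SOCK=/var/tmp/spdk-nbd.sock
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Export a bdev as a kernel NBD device.
    "$RPC" -s "$SOCK" nbd_start_disk Nvme0n1 /dev/nbd0

    # Wait (up to 20 tries) for the kernel to list the new device.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1
    done

    # Smoke test: a single 4 KiB direct read must transfer a non-empty block.
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [ "$(stat -c %s /tmp/nbdtest)" -ne 0 ]
    rm -f /tmp/nbdtest

The teardown traced earlier is the inverse: nbd_stop_disk followed by the same bounded grep loop, this time waiting for the device to disappear from /proc/partitions.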
00:09:52.015 19:28:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:52.015 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:52.015 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:52.015 19:28:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:09:52.275 /dev/nbd11 00:09:52.275 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:52.275 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:52.275 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:09:52.275 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:52.275 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:52.275 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:52.275 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:09:52.275 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:52.275 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:52.275 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:52.275 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:52.275 1+0 records in 00:09:52.275 1+0 records out 00:09:52.275 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341838 s, 12.0 MB/s 00:09:52.275 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.275 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:52.275 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.275 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:52.275 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:52.275 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:52.275 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:52.275 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:09:52.275 /dev/nbd12 00:09:52.559 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:52.559 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:52.559 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:09:52.559 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # 
(( i = 1 )) 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:52.560 1+0 records in 00:09:52.560 1+0 records out 00:09:52.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000460634 s, 8.9 MB/s 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:09:52.560 /dev/nbd13 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:52.560 1+0 records in 00:09:52.560 1+0 records out 00:09:52.560 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544751 s, 7.5 MB/s 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:52.560 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:52.818 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:52.818 { 00:09:52.818 "nbd_device": "/dev/nbd0", 00:09:52.818 "bdev_name": "Nvme0n1" 00:09:52.818 }, 00:09:52.818 { 00:09:52.818 "nbd_device": "/dev/nbd1", 00:09:52.818 "bdev_name": "Nvme1n1" 00:09:52.818 }, 00:09:52.818 { 00:09:52.818 "nbd_device": "/dev/nbd10", 00:09:52.818 "bdev_name": "Nvme2n1" 00:09:52.818 }, 00:09:52.818 { 00:09:52.818 "nbd_device": "/dev/nbd11", 00:09:52.818 "bdev_name": "Nvme2n2" 00:09:52.818 }, 00:09:52.818 { 00:09:52.818 "nbd_device": "/dev/nbd12", 00:09:52.818 "bdev_name": "Nvme2n3" 00:09:52.818 }, 00:09:52.818 { 00:09:52.818 "nbd_device": "/dev/nbd13", 00:09:52.818 "bdev_name": "Nvme3n1" 00:09:52.818 } 00:09:52.818 ]' 00:09:52.818 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:52.818 { 00:09:52.818 "nbd_device": "/dev/nbd0", 00:09:52.818 "bdev_name": "Nvme0n1" 00:09:52.818 }, 00:09:52.818 { 00:09:52.818 "nbd_device": "/dev/nbd1", 00:09:52.818 "bdev_name": "Nvme1n1" 00:09:52.818 }, 00:09:52.818 { 00:09:52.818 "nbd_device": "/dev/nbd10", 00:09:52.818 "bdev_name": "Nvme2n1" 00:09:52.818 }, 00:09:52.818 { 00:09:52.818 "nbd_device": "/dev/nbd11", 00:09:52.818 "bdev_name": "Nvme2n2" 00:09:52.818 }, 00:09:52.818 { 00:09:52.818 "nbd_device": "/dev/nbd12", 00:09:52.818 "bdev_name": "Nvme2n3" 00:09:52.818 }, 00:09:52.818 { 00:09:52.818 "nbd_device": "/dev/nbd13", 00:09:52.818 "bdev_name": "Nvme3n1" 00:09:52.818 } 00:09:52.818 ]' 00:09:52.818 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:52.818 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:52.818 /dev/nbd1 00:09:52.818 /dev/nbd10 00:09:52.818 /dev/nbd11 00:09:52.818 /dev/nbd12 00:09:52.818 /dev/nbd13' 00:09:52.818 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:52.818 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:52.818 /dev/nbd1 00:09:52.818 /dev/nbd10 00:09:52.818 /dev/nbd11 00:09:52.818 /dev/nbd12 00:09:52.818 /dev/nbd13' 00:09:52.818 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:09:52.818 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:09:52.818 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:09:52.818 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:09:52.818 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:09:52.818 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:52.818 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:52.818 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:52.818 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:52.818 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:52.818 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:52.818 256+0 records in 00:09:52.818 256+0 records out 00:09:52.818 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00494767 s, 212 MB/s 00:09:52.818 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:52.818 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:53.078 256+0 records in 00:09:53.078 256+0 records out 00:09:53.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0633739 s, 16.5 MB/s 00:09:53.078 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:53.078 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:53.078 256+0 records in 00:09:53.078 256+0 records out 00:09:53.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.064533 s, 16.2 MB/s 00:09:53.078 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:53.078 19:28:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:53.078 256+0 records in 00:09:53.078 256+0 records out 00:09:53.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0656408 s, 16.0 MB/s 00:09:53.078 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:53.078 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:53.339 256+0 records in 00:09:53.339 256+0 records out 00:09:53.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0728776 s, 14.4 MB/s 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:53.339 256+0 records in 00:09:53.339 256+0 records out 00:09:53.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0558038 s, 18.8 MB/s 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:53.339 256+0 records in 00:09:53.339 256+0 records out 00:09:53.339 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.054854 s, 19.1 MB/s 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 
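With all six devices exported, nbd_dd_data_verify runs in two passes: the write pass traced here copies 1 MiB of random data to each device with O_DIRECT, and the verify pass that follows byte-compares the first 1 MiB read back from each device against the same temp file. The pattern, reduced to its essentials (temp path shortened for the sketch):

    tmp=/tmp/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

    # Write pass: 256 random 4 KiB blocks (1 MiB total) to every device.
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct
    done

    # Verify pass: the data read back must match the source byte-for-byte.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp" "$dev"
    done
    rm "$tmp"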
00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.339 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:53.601 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:53.601 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:53.601 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:53.601 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.601 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.601 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:53.601 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:53.601 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.601 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.601 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:53.862 
19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:53.862 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:53.862 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:53.862 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.862 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.862 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:53.862 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:53.862 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.862 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.862 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:53.862 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:53.862 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:53.862 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:53.862 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:53.862 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:53.862 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:53.862 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:53.862 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:53.862 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:53.862 19:28:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:54.122 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:54.123 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:54.123 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:54.123 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:54.123 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:54.123 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:54.123 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:54.123 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:54.123 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:54.123 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:54.384 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:54.384 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:54.384 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:54.384 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:54.384 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:54.384 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd12 /proc/partitions 00:09:54.384 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:54.384 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:54.384 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:54.384 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:54.645 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:54.645 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:54.645 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:54.645 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:54.645 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:54.645 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:54.645 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:54.645 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:54.645 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:54.645 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.645 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:54.906 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:54.906 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:54.906 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:54.906 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:54.906 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:54.906 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:54.906 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:54.906 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:54.906 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:54.906 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:54.906 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:54.906 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:54.906 19:28:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:54.906 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:54.906 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:09:54.906 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:54.906 malloc_lvol_verify 00:09:54.906 19:28:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:55.166 fdc0e53b-3578-46a0-9c36-3c0c1021e23b 00:09:55.166 19:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:55.427 0a64d37c-57a6-4db8-9fb8-87b3d2049f05 00:09:55.427 19:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:55.686 /dev/nbd0 00:09:55.686 19:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:09:55.686 19:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:09:55.686 19:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:09:55.686 19:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:09:55.686 19:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:09:55.686 mke2fs 1.47.0 (5-Feb-2023) 00:09:55.686 Discarding device blocks: 0/4096 done 00:09:55.686 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:55.686 00:09:55.686 Allocating group tables: 0/1 done 00:09:55.686 Writing inode tables: 0/1 done 00:09:55.686 Creating journal (1024 blocks): done 00:09:55.686 Writing superblocks and filesystem accounting information: 0/1 done 00:09:55.686 00:09:55.686 19:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:55.686 19:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:55.686 19:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:55.686 19:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:55.686 19:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:55.686 19:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:55.686 19:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:55.946 19:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:55.946 19:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:55.946 19:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:55.946 19:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:55.946 19:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:55.946 19:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:55.946 19:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:55.946 19:28:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:55.946 19:28:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60248 00:09:55.946 19:28:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 60248 ']' 00:09:55.946 19:28:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 60248 00:09:55.946 19:28:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:09:55.946 19:28:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.946 19:28:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60248 00:09:55.946 19:28:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:55.946 19:28:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 
-- # '[' reactor_0 = sudo ']' 00:09:55.946 killing process with pid 60248 00:09:55.946 19:28:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60248' 00:09:55.946 19:28:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 60248 00:09:55.946 19:28:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 60248 00:09:56.515 19:28:15 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:56.515 00:09:56.515 real 0m9.340s 00:09:56.515 user 0m13.538s 00:09:56.515 sys 0m2.942s 00:09:56.515 19:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.515 ************************************ 00:09:56.515 END TEST bdev_nbd 00:09:56.515 ************************************ 00:09:56.515 19:28:15 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:56.515 19:28:15 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:09:56.515 19:28:15 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:09:56.515 skipping fio tests on NVMe due to multi-ns failures. 00:09:56.515 19:28:15 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:09:56.515 19:28:15 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:56.515 19:28:15 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:56.515 19:28:15 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:09:56.515 19:28:15 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.515 19:28:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:09:56.515 ************************************ 00:09:56.515 START TEST bdev_verify 00:09:56.515 ************************************ 00:09:56.515 19:28:15 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:56.515 [2024-12-05 19:28:15.480590] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:09:56.515 [2024-12-05 19:28:15.480706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60623 ] 00:09:56.774 [2024-12-05 19:28:15.637046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:56.774 [2024-12-05 19:28:15.720061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.774 [2024-12-05 19:28:15.720063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.345 Running I/O for 5 seconds... 
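bdev_verify, which starts here, drives the six NVMe bdevs with SPDK's bdevperf example application instead of the kernel NBD path. The invocation recorded in the trace can be rerun standalone; the flag glosses below are a sketch (for -C and the trailing empty positional argument, which the harness passes through as-is, consult bdevperf's own help output):

    # -q 128   : 128 outstanding I/Os per job
    # -o 4096  : 4 KiB I/O size
    # -w verify: write, read back, and compare the data
    # -t 5     : run for five seconds
    # -m 0x3   : SPDK core mask, one reactor each on cores 0 and 1
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''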
00:09:59.660 22080.00 IOPS, 86.25 MiB/s [2024-12-05T19:28:19.620Z] 23552.00 IOPS, 92.00 MiB/s [2024-12-05T19:28:20.608Z] 23850.67 IOPS, 93.17 MiB/s [2024-12-05T19:28:21.543Z] 24064.00 IOPS, 94.00 MiB/s [2024-12-05T19:28:21.543Z] 24524.80 IOPS, 95.80 MiB/s 00:10:02.537 Latency(us) 00:10:02.537 [2024-12-05T19:28:21.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.537 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:02.537 Verification LBA range: start 0x0 length 0xbd0bd 00:10:02.537 Nvme0n1 : 5.04 2006.28 7.84 0.00 0.00 63613.36 13913.80 67754.14 00:10:02.537 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:02.537 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:10:02.537 Nvme0n1 : 5.05 2027.95 7.92 0.00 0.00 62937.39 12905.55 69770.63 00:10:02.537 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:02.537 Verification LBA range: start 0x0 length 0xa0000 00:10:02.537 Nvme1n1 : 5.04 2005.62 7.83 0.00 0.00 63507.97 13812.97 62107.96 00:10:02.537 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:02.537 Verification LBA range: start 0xa0000 length 0xa0000 00:10:02.537 Nvme1n1 : 5.05 2027.40 7.92 0.00 0.00 62824.08 13107.20 60494.77 00:10:02.537 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:02.537 Verification LBA range: start 0x0 length 0x80000 00:10:02.537 Nvme2n1 : 5.04 2005.08 7.83 0.00 0.00 63408.89 13409.67 59284.87 00:10:02.537 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:02.537 Verification LBA range: start 0x80000 length 0x80000 00:10:02.538 Nvme2n1 : 5.05 2026.86 7.92 0.00 0.00 62705.66 12149.37 54848.59 00:10:02.538 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:02.538 Verification LBA range: start 0x0 length 0x80000 00:10:02.538 Nvme2n2 : 5.06 2012.09 7.86 0.00 0.00 63072.10 3579.27 60494.77 00:10:02.538 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:02.538 Verification LBA range: start 0x80000 length 0x80000 00:10:02.538 Nvme2n2 : 5.05 2026.29 7.92 0.00 0.00 62600.94 11796.48 53638.70 00:10:02.538 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:02.538 Verification LBA range: start 0x0 length 0x80000 00:10:02.538 Nvme2n3 : 5.07 2020.36 7.89 0.00 0.00 62747.05 8620.50 60091.47 00:10:02.538 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:02.538 Verification LBA range: start 0x80000 length 0x80000 00:10:02.538 Nvme2n3 : 5.06 2034.73 7.95 0.00 0.00 62217.43 3226.39 56058.49 00:10:02.538 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:02.538 Verification LBA range: start 0x0 length 0x20000 00:10:02.538 Nvme3n1 : 5.07 2019.84 7.89 0.00 0.00 62640.39 8015.56 64527.75 00:10:02.538 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:02.538 Verification LBA range: start 0x20000 length 0x20000 00:10:02.538 Nvme3n1 : 5.07 2043.85 7.98 0.00 0.00 61882.47 6251.13 62107.96 00:10:02.538 [2024-12-05T19:28:21.544Z] =================================================================================================================== 00:10:02.538 [2024-12-05T19:28:21.544Z] Total : 24256.37 94.75 0.00 0.00 62842.95 3226.39 69770.63 00:10:03.911 00:10:03.911 real 0m7.183s 00:10:03.911 user 0m13.494s 00:10:03.911 sys 0m0.201s 00:10:03.911 19:28:22 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.911 19:28:22 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:10:03.911 ************************************ 00:10:03.911 END TEST bdev_verify 00:10:03.911 ************************************ 00:10:03.911 19:28:22 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:03.911 19:28:22 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:10:03.911 19:28:22 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.911 19:28:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:03.911 ************************************ 00:10:03.911 START TEST bdev_verify_big_io 00:10:03.911 ************************************ 00:10:03.911 19:28:22 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:03.911 [2024-12-05 19:28:22.709002] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:10:03.911 [2024-12-05 19:28:22.709139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60716 ] 00:10:03.911 [2024-12-05 19:28:22.869256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:04.169 [2024-12-05 19:28:22.970335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:04.169 [2024-12-05 19:28:22.970544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.736 Running I/O for 5 seconds... 
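The MiB/s column in these bdevperf tables is simply IOPS multiplied by the I/O size. For the 4 KiB verify run above, 24524.80 IOPS * 4096 B comes to about 95.80 MiB/s, matching the final sample; the big-I/O run now starting uses -o 65536, so equal bandwidth shows up as one sixteenth the IOPS. A one-line sanity check:

    awk 'BEGIN { printf "%.2f MiB/s\n", 24524.80 * 4096 / 1048576 }'   # prints 95.80 MiB/s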
00:10:08.948 1072.00 IOPS, 67.00 MiB/s [2024-12-05T19:28:29.327Z] 1808.50 IOPS, 113.03 MiB/s [2024-12-05T19:28:29.892Z] 1969.67 IOPS, 123.10 MiB/s [2024-12-05T19:28:29.892Z] 2079.50 IOPS, 129.97 MiB/s 00:10:10.886 Latency(us) 00:10:10.886 [2024-12-05T19:28:29.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:10.886 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:10.886 Verification LBA range: start 0x0 length 0xbd0b 00:10:10.886 Nvme0n1 : 5.59 125.92 7.87 0.00 0.00 981724.48 16535.24 1206669.00 00:10:10.886 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:10.886 Verification LBA range: start 0xbd0b length 0xbd0b 00:10:10.886 Nvme0n1 : 5.65 118.67 7.42 0.00 0.00 1042203.70 9175.04 1690627.15 00:10:10.886 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:10.886 Verification LBA range: start 0x0 length 0xa000 00:10:10.886 Nvme1n1 : 5.69 131.65 8.23 0.00 0.00 907657.24 33473.77 1000180.18 00:10:10.886 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:10.886 Verification LBA range: start 0xa000 length 0xa000 00:10:10.886 Nvme1n1 : 5.65 123.35 7.71 0.00 0.00 966924.99 34280.37 1522854.99 00:10:10.886 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:10.886 Verification LBA range: start 0x0 length 0x8000 00:10:10.886 Nvme2n1 : 5.69 134.86 8.43 0.00 0.00 857828.96 62107.96 909841.33 00:10:10.886 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:10.886 Verification LBA range: start 0x8000 length 0x8000 00:10:10.886 Nvme2n1 : 5.71 120.95 7.56 0.00 0.00 939297.29 59688.17 1780966.01 00:10:10.886 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:10.886 Verification LBA range: start 0x0 length 0x8000 00:10:10.886 Nvme2n2 : 5.77 143.62 8.98 0.00 0.00 777100.80 26416.05 903388.55 00:10:10.886 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:10.886 Verification LBA range: start 0x8000 length 0x8000 00:10:10.886 Nvme2n2 : 5.88 134.75 8.42 0.00 0.00 807678.40 28230.89 1806777.11 00:10:10.886 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:10.886 Verification LBA range: start 0x0 length 0x8000 00:10:10.886 Nvme2n3 : 6.00 159.28 9.95 0.00 0.00 676292.81 34885.32 974369.08 00:10:10.886 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:10.886 Verification LBA range: start 0x8000 length 0x8000 00:10:10.886 Nvme2n3 : 6.05 152.31 9.52 0.00 0.00 698896.44 30852.33 1832588.21 00:10:10.886 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:10.886 Verification LBA range: start 0x0 length 0x2000 00:10:10.886 Nvme3n1 : 6.03 173.22 10.83 0.00 0.00 602947.91 1764.43 1084066.26 00:10:10.886 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:10.886 Verification LBA range: start 0x2000 length 0x2000 00:10:10.886 Nvme3n1 : 6.11 191.23 11.95 0.00 0.00 539243.61 285.14 1664816.05 00:10:10.886 [2024-12-05T19:28:29.892Z] =================================================================================================================== 00:10:10.886 [2024-12-05T19:28:29.892Z] Total : 1709.81 106.86 0.00 0.00 789266.48 285.14 1832588.21 00:10:12.782 00:10:12.782 real 0m8.967s 00:10:12.782 user 0m17.007s 00:10:12.782 sys 0m0.231s 00:10:12.782 19:28:31 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:12.782 19:28:31 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:10:12.782 ************************************ 00:10:12.782 END TEST bdev_verify_big_io 00:10:12.782 ************************************ 00:10:12.782 19:28:31 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:12.782 19:28:31 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:12.782 19:28:31 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.782 19:28:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:12.782 ************************************ 00:10:12.782 START TEST bdev_write_zeroes 00:10:12.782 ************************************ 00:10:12.782 19:28:31 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:12.782 [2024-12-05 19:28:31.713412] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:10:12.782 [2024-12-05 19:28:31.713526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60825 ] 00:10:13.038 [2024-12-05 19:28:31.873041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.038 [2024-12-05 19:28:31.973065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.602 Running I/O for 1 seconds... 
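bdev_write_zeroes swaps only the workload flag: the same bdevperf harness, a single core this time, running one second of -w write_zeroes, which exercises the bdev layer's zero-fill path (offloaded to the NVMe Write Zeroes command where the controller supports it, emulated with zero-filled writes otherwise). The reduced invocation:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1 ''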
00:10:14.591 72960.00 IOPS, 285.00 MiB/s 00:10:14.591 Latency(us) 00:10:14.591 [2024-12-05T19:28:33.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:14.591 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:14.591 Nvme0n1 : 1.02 12075.59 47.17 0.00 0.00 10577.53 8922.98 20669.05 00:10:14.591 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:14.591 Nvme1n1 : 1.02 12061.96 47.12 0.00 0.00 10576.70 9124.63 20265.75 00:10:14.591 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:14.591 Nvme2n1 : 1.03 12048.20 47.06 0.00 0.00 10567.53 8973.39 19459.15 00:10:14.591 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:14.591 Nvme2n2 : 1.03 12034.61 47.01 0.00 0.00 10563.71 9023.80 18955.03 00:10:14.591 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:14.591 Nvme2n3 : 1.03 12021.08 46.96 0.00 0.00 10538.78 6452.78 19055.85 00:10:14.591 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:10:14.591 Nvme3n1 : 1.03 12007.59 46.90 0.00 0.00 10530.32 6024.27 20568.22 00:10:14.591 [2024-12-05T19:28:33.597Z] =================================================================================================================== 00:10:14.591 [2024-12-05T19:28:33.597Z] Total : 72249.03 282.22 0.00 0.00 10559.09 6024.27 20669.05 00:10:15.525 00:10:15.525 real 0m2.657s 00:10:15.525 user 0m2.361s 00:10:15.525 sys 0m0.181s 00:10:15.525 19:28:34 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.525 ************************************ 00:10:15.525 END TEST bdev_write_zeroes 00:10:15.525 ************************************ 00:10:15.525 19:28:34 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:10:15.525 19:28:34 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:15.525 19:28:34 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:15.525 19:28:34 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.525 19:28:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:15.525 ************************************ 00:10:15.525 START TEST bdev_json_nonenclosed 00:10:15.525 ************************************ 00:10:15.525 19:28:34 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:15.525 [2024-12-05 19:28:34.405009] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
00:10:15.525 [2024-12-05 19:28:34.405143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60879 ] 00:10:15.782 [2024-12-05 19:28:34.565242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.783 [2024-12-05 19:28:34.663436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.783 [2024-12-05 19:28:34.663521] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:10:15.783 [2024-12-05 19:28:34.663539] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:15.783 [2024-12-05 19:28:34.663548] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:16.040 00:10:16.040 real 0m0.495s 00:10:16.040 user 0m0.308s 00:10:16.040 sys 0m0.083s 00:10:16.040 19:28:34 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.040 19:28:34 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:10:16.040 ************************************ 00:10:16.040 END TEST bdev_json_nonenclosed 00:10:16.040 ************************************ 00:10:16.040 19:28:34 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:16.040 19:28:34 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:10:16.040 19:28:34 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.040 19:28:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:16.040 ************************************ 00:10:16.040 START TEST bdev_json_nonarray 00:10:16.040 ************************************ 00:10:16.040 19:28:34 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:10:16.041 [2024-12-05 19:28:34.949142] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:10:16.041 [2024-12-05 19:28:34.949261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60899 ] 00:10:16.296 [2024-12-05 19:28:35.111829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.296 [2024-12-05 19:28:35.216868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.296 [2024-12-05 19:28:35.216957] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
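Note: bdev_json_nonenclosed above and bdev_json_nonarray (completing just below) are negative tests of the JSON config loader: each feeds bdevperf a deliberately malformed config and asserts a non-zero exit via spdk_app_stop. A sketch of the three shapes, reconstructed from the two json_config_prepare_ctx error messages logged above; the exact fixture contents are an assumption, only the error strings are from the log:

# valid: one enclosing object whose "subsystems" key holds an array
cat > bdev.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
EOF

# nonenclosed.json: top level not enclosed in {}
# -> "Invalid JSON configuration: not enclosed in {}."
cat > nonenclosed.json <<'EOF'
"subsystems": [ { "subsystem": "bdev", "config": [] } ]
EOF

# nonarray.json: "subsystems" maps to an object instead of an array
# -> "Invalid JSON configuration: 'subsystems' should be an array."
cat > nonarray.json <<'EOF'
{ "subsystems": { "subsystem": "bdev", "config": [] } }
EOF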
00:10:16.296 [2024-12-05 19:28:35.216974] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:16.296 [2024-12-05 19:28:35.216984] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:16.553 00:10:16.553 real 0m0.522s 00:10:16.553 user 0m0.319s 00:10:16.553 sys 0m0.099s 00:10:16.553 19:28:35 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.553 ************************************ 00:10:16.553 END TEST bdev_json_nonarray 00:10:16.553 ************************************ 00:10:16.553 19:28:35 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:10:16.553 19:28:35 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:10:16.553 19:28:35 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:10:16.553 19:28:35 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:10:16.553 19:28:35 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:10:16.553 19:28:35 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:10:16.553 19:28:35 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:10:16.553 19:28:35 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:16.553 19:28:35 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:10:16.553 19:28:35 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:10:16.553 19:28:35 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:10:16.553 19:28:35 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:10:16.553 00:10:16.553 real 0m36.236s 00:10:16.553 user 0m57.061s 00:10:16.553 sys 0m4.893s 00:10:16.553 19:28:35 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.553 19:28:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:10:16.553 ************************************ 00:10:16.553 END TEST blockdev_nvme 00:10:16.553 ************************************ 00:10:16.553 19:28:35 -- spdk/autotest.sh@209 -- # uname -s 00:10:16.553 19:28:35 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:10:16.553 19:28:35 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:16.553 19:28:35 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:16.553 19:28:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.553 19:28:35 -- common/autotest_common.sh@10 -- # set +x 00:10:16.553 ************************************ 00:10:16.553 START TEST blockdev_nvme_gpt 00:10:16.553 ************************************ 00:10:16.553 19:28:35 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:10:16.553 * Looking for test storage... 
00:10:16.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:10:16.553 19:28:35 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:16.553 19:28:35 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:10:16.553 19:28:35 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:16.813 19:28:35 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:16.813 19:28:35 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:10:16.813 19:28:35 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.813 19:28:35 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:16.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.813 --rc genhtml_branch_coverage=1 00:10:16.813 --rc genhtml_function_coverage=1 00:10:16.813 --rc genhtml_legend=1 00:10:16.813 --rc geninfo_all_blocks=1 00:10:16.813 --rc geninfo_unexecuted_blocks=1 00:10:16.813 00:10:16.813 ' 00:10:16.813 19:28:35 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:16.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.813 --rc 
genhtml_branch_coverage=1 00:10:16.813 --rc genhtml_function_coverage=1 00:10:16.813 --rc genhtml_legend=1 00:10:16.813 --rc geninfo_all_blocks=1 00:10:16.813 --rc geninfo_unexecuted_blocks=1 00:10:16.813 00:10:16.813 ' 00:10:16.813 19:28:35 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:16.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.813 --rc genhtml_branch_coverage=1 00:10:16.813 --rc genhtml_function_coverage=1 00:10:16.813 --rc genhtml_legend=1 00:10:16.813 --rc geninfo_all_blocks=1 00:10:16.813 --rc geninfo_unexecuted_blocks=1 00:10:16.813 00:10:16.813 ' 00:10:16.813 19:28:35 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:16.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.813 --rc genhtml_branch_coverage=1 00:10:16.813 --rc genhtml_function_coverage=1 00:10:16.813 --rc genhtml_legend=1 00:10:16.813 --rc geninfo_all_blocks=1 00:10:16.813 --rc geninfo_unexecuted_blocks=1 00:10:16.813 00:10:16.813 ' 00:10:16.813 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:10:16.813 19:28:35 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:10:16.813 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:10:16.813 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:16.813 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:10:16.813 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:10:16.813 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:10:16.813 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:10:16.813 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:10:16.813 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:10:16.813 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:10:16.813 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:10:16.813 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:10:16.813 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:10:16.813 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:10:16.813 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:10:16.813 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:10:16.814 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:10:16.814 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:10:16.814 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:10:16.814 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:10:16.814 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:10:16.814 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:10:16.814 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:10:16.814 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:10:16.814 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60983 00:10:16.814 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; 
exit 1' SIGINT SIGTERM EXIT 00:10:16.814 19:28:35 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60983 00:10:16.814 19:28:35 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 60983 ']' 00:10:16.814 19:28:35 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.814 19:28:35 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.814 19:28:35 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.814 19:28:35 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.814 19:28:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:16.814 [2024-12-05 19:28:35.705908] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:10:16.814 [2024-12-05 19:28:35.706054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60983 ] 00:10:17.071 [2024-12-05 19:28:35.857784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.071 [2024-12-05 19:28:35.955812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.638 19:28:36 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.638 19:28:36 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:10:17.638 19:28:36 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:10:17.638 19:28:36 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:10:17.638 19:28:36 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:17.897 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:18.154 Waiting for block devices as requested 00:10:18.154 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:18.412 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:18.412 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:18.412 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:23.672 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:23.672 19:28:42 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:10:23.672 19:28:42 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:10:23.672 19:28:42 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:10:23.672 19:28:42 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:10:23.672 19:28:42 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:10:23.672 19:28:42 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:10:23.672 19:28:42 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:10:23.672 19:28:42 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:10:23.672 19:28:42 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:10:23.672 19:28:42 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:10:23.672 19:28:42 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:10:23.672 BYT; 00:10:23.672 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:10:23.672 19:28:42 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:10:23.672 BYT; 00:10:23.672 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:10:23.672 19:28:42 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:10:23.672 19:28:42 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:10:23.672 19:28:42 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:10:23.672 19:28:42 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:10:23.672 19:28:42 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:10:23.672 19:28:42 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:10:23.672 19:28:42 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:10:23.672 19:28:42 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:10:23.672 19:28:42 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:23.672 19:28:42 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:23.672 19:28:42 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:10:23.672 19:28:42 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:10:23.672 19:28:42 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:23.672 19:28:42 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:10:23.672 19:28:42 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:23.672 19:28:42 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:23.672 19:28:42 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:10:23.672 19:28:42 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:10:23.672 19:28:42 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:10:23.672 19:28:42 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:10:23.672 19:28:42 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:23.672 19:28:42 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:10:23.672 19:28:42 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:10:23.672 19:28:42 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:10:23.672 19:28:42 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:10:23.672 19:28:42 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:23.672 19:28:42 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:23.672 19:28:42 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:10:23.672 19:28:42 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:10:24.605 The operation has completed successfully. 00:10:24.605 19:28:43 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:10:25.538 The operation has completed successfully. 00:10:25.538 19:28:44 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:26.140 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:26.397 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:26.397 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:26.397 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:26.654 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:26.654 19:28:45 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:10:26.654 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.654 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:26.654 [] 00:10:26.654 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.654 19:28:45 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:10:26.654 19:28:45 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:10:26.654 19:28:45 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:10:26.654 19:28:45 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:26.654 19:28:45 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:10:26.654 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.654 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:26.912 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.912 19:28:45 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:10:26.912 19:28:45 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.912 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:26.912 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.912 19:28:45 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:10:26.912 19:28:45 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:10:26.912 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.912 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:26.912 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.912 19:28:45 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:10:26.912 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.912 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:26.912 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.912 19:28:45 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:10:26.912 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.912 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:26.912 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.912 19:28:45 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:10:26.912 19:28:45 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:10:26.912 19:28:45 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:10:26.912 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.912 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:26.912 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.171 19:28:45 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:10:27.171 19:28:45 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:10:27.172 19:28:45 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "c694419c-19aa-4ab2-aa57-601c505463c0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c694419c-19aa-4ab2-aa57-601c505463c0",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "8a5804fa-3281-40f2-a2f7-1116932eb42a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8a5804fa-3281-40f2-a2f7-1116932eb42a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "ddee9a37-877c-4a71-a5c3-0ab272d97183"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ddee9a37-877c-4a71-a5c3-0ab272d97183",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "4bbc9e3a-95b7-4dbe-b131-631470894c1a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4bbc9e3a-95b7-4dbe-b131-631470894c1a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "fc7bb67e-7774-46b1-b8b6-2ce364c66c14"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "fc7bb67e-7774-46b1-b8b6-2ce364c66c14",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:10:27.172 19:28:45 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:10:27.172 19:28:45 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:10:27.172 19:28:45 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:10:27.172 19:28:45 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 60983 00:10:27.172 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 60983 ']' 00:10:27.172 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 60983 00:10:27.172 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:10:27.172 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:27.172 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60983 00:10:27.172 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:27.172 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:27.172 killing process with pid 60983 00:10:27.172 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60983' 00:10:27.172 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 60983 00:10:27.172 19:28:45 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 60983 00:10:28.545 19:28:47 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:28.545 19:28:47 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:28.545 19:28:47 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:10:28.545 19:28:47 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.545 19:28:47 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:28.545 ************************************ 00:10:28.545 START TEST bdev_hello_world 00:10:28.545 ************************************ 00:10:28.545 19:28:47 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:10:28.803 [2024-12-05 19:28:47.552667] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:10:28.803 [2024-12-05 19:28:47.552776] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61602 ] 00:10:28.803 [2024-12-05 19:28:47.716243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.060 [2024-12-05 19:28:47.814640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:29.623 [2024-12-05 19:28:48.351350] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:10:29.623 [2024-12-05 19:28:48.351401] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:10:29.623 [2024-12-05 19:28:48.351422] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:10:29.624 [2024-12-05 19:28:48.353915] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:10:29.624 [2024-12-05 19:28:48.354323] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:10:29.624 [2024-12-05 19:28:48.354352] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:10:29.624 [2024-12-05 19:28:48.354583] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
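Note: hello_bdev above is the minimal SPDK bdev application: it opens a bdev, writes a string, reads it back (the "Hello World!" line above), then stops the app (below). The invocation as logged by run_test earlier in this test, paths as in this checkout:

# Open Nvme0n1 as declared in bdev.json, write "Hello World!", read it back.
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -b Nvme0n1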
00:10:29.624 00:10:29.624 [2024-12-05 19:28:48.354613] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:10:30.232 00:10:30.232 real 0m1.585s 00:10:30.232 user 0m1.304s 00:10:30.232 sys 0m0.171s 00:10:30.232 19:28:49 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.232 ************************************ 00:10:30.232 END TEST bdev_hello_world 00:10:30.232 ************************************ 00:10:30.232 19:28:49 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:30.232 19:28:49 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:10:30.232 19:28:49 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:30.232 19:28:49 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.232 19:28:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:30.232 ************************************ 00:10:30.232 START TEST bdev_bounds 00:10:30.232 ************************************ 00:10:30.232 19:28:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:10:30.232 19:28:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61638 00:10:30.232 19:28:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:10:30.232 Process bdevio pid: 61638 00:10:30.232 19:28:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61638' 00:10:30.232 19:28:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:30.232 19:28:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61638 00:10:30.232 19:28:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61638 ']' 00:10:30.232 19:28:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.232 19:28:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:30.232 19:28:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.232 19:28:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:30.232 19:28:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:30.232 [2024-12-05 19:28:49.200241] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
00:10:30.232 [2024-12-05 19:28:49.200418] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61638 ] 00:10:30.550 [2024-12-05 19:28:49.372307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:30.550 [2024-12-05 19:28:49.459528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:30.550 [2024-12-05 19:28:49.459769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:30.550 [2024-12-05 19:28:49.459812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.113 19:28:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:31.113 19:28:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:10:31.113 19:28:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:10:31.371 I/O targets: 00:10:31.371 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:10:31.371 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:10:31.371 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:10:31.371 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:31.371 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:31.371 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:10:31.371 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:10:31.371 00:10:31.371 00:10:31.371 CUnit - A unit testing framework for C - Version 2.1-3 00:10:31.371 http://cunit.sourceforge.net/ 00:10:31.371 00:10:31.371 00:10:31.371 Suite: bdevio tests on: Nvme3n1 00:10:31.371 Test: blockdev write read block ...passed 00:10:31.371 Test: blockdev write zeroes read block ...passed 00:10:31.371 Test: blockdev write zeroes read no split ...passed 00:10:31.371 Test: blockdev write zeroes read split ...passed 00:10:31.371 Test: blockdev write zeroes read split partial ...passed 00:10:31.371 Test: blockdev reset ...[2024-12-05 19:28:50.164514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:10:31.371 [2024-12-05 19:28:50.167367] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:10:31.371 passed 00:10:31.371 Test: blockdev write read 8 blocks ...passed 00:10:31.371 Test: blockdev write read size > 128k ...passed 00:10:31.371 Test: blockdev write read invalid size ...passed 00:10:31.371 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:31.371 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:31.371 Test: blockdev write read max offset ...passed 00:10:31.371 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:31.371 Test: blockdev writev readv 8 blocks ...passed 00:10:31.371 Test: blockdev writev readv 30 x 1block ...passed 00:10:31.371 Test: blockdev writev readv block ...passed 00:10:31.371 Test: blockdev writev readv size > 128k ...passed 00:10:31.371 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:31.371 Test: blockdev comparev and writev ...[2024-12-05 19:28:50.172899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bc604000 len:0x1000 00:10:31.371 [2024-12-05 19:28:50.172947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:31.371 passed 00:10:31.371 Test: blockdev nvme passthru rw ...passed 00:10:31.371 Test: blockdev nvme passthru vendor specific ...passed 00:10:31.371 Test: blockdev nvme admin passthru ...[2024-12-05 19:28:50.173438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:31.371 [2024-12-05 19:28:50.173466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:31.371 passed 00:10:31.371 Test: blockdev copy ...passed 00:10:31.371 Suite: bdevio tests on: Nvme2n3 00:10:31.371 Test: blockdev write read block ...passed 00:10:31.371 Test: blockdev write zeroes read block ...passed 00:10:31.371 Test: blockdev write zeroes read no split ...passed 00:10:31.371 Test: blockdev write zeroes read split ...passed 00:10:31.371 Test: blockdev write zeroes read split partial ...passed 00:10:31.371 Test: blockdev reset ...[2024-12-05 19:28:50.216536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:31.371 [2024-12-05 19:28:50.219764] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:10:31.371 passed 00:10:31.371 Test: blockdev write read 8 blocks ...
00:10:31.371 passed 00:10:31.371 Test: blockdev write read size > 128k ...passed 00:10:31.371 Test: blockdev write read invalid size ...passed 00:10:31.371 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:31.371 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:31.371 Test: blockdev write read max offset ...passed 00:10:31.371 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:31.371 Test: blockdev writev readv 8 blocks ...passed 00:10:31.371 Test: blockdev writev readv 30 x 1block ...passed 00:10:31.371 Test: blockdev writev readv block ...passed 00:10:31.371 Test: blockdev writev readv size > 128k ...passed 00:10:31.371 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:31.371 Test: blockdev comparev and writev ...[2024-12-05 19:28:50.225280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bc602000 len:0x1000 00:10:31.371 passed 00:10:31.371 Test: blockdev nvme passthru rw ...[2024-12-05 19:28:50.225320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:31.371 passed 00:10:31.371 Test: blockdev nvme passthru vendor specific ...[2024-12-05 19:28:50.225754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:31.371 passed 00:10:31.371 Test: blockdev nvme admin passthru ...[2024-12-05 19:28:50.225785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:31.371 passed 00:10:31.371 Test: blockdev copy ...passed 00:10:31.371 Suite: bdevio tests on: Nvme2n2 00:10:31.371 Test: blockdev write read block ...passed 00:10:31.371 Test: blockdev write zeroes read block ...passed 00:10:31.371 Test: blockdev write zeroes read no split ...passed 00:10:31.371 Test: blockdev write zeroes read split ...passed 00:10:31.371 Test: blockdev write zeroes read split partial ...passed 00:10:31.371 Test: blockdev reset ...[2024-12-05 19:28:50.271239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:31.371 [2024-12-05 19:28:50.274353] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:10:31.371 passed 00:10:31.371 Test: blockdev write read 8 blocks ...
00:10:31.371 passed 00:10:31.371 Test: blockdev write read size > 128k ...passed 00:10:31.371 Test: blockdev write read invalid size ...passed 00:10:31.371 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:31.371 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:31.371 Test: blockdev write read max offset ...passed 00:10:31.371 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:31.371 Test: blockdev writev readv 8 blocks ...passed 00:10:31.371 Test: blockdev writev readv 30 x 1block ...passed 00:10:31.371 Test: blockdev writev readv block ...passed 00:10:31.371 Test: blockdev writev readv size > 128k ...passed 00:10:31.371 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:31.371 Test: blockdev comparev and writev ...[2024-12-05 19:28:50.280978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1638000 len:0x1000 00:10:31.371 [2024-12-05 19:28:50.281023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:31.371 passed 00:10:31.371 Test: blockdev nvme passthru rw ...passed 00:10:31.371 Test: blockdev nvme passthru vendor specific ...[2024-12-05 19:28:50.281825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:31.371 [2024-12-05 19:28:50.281851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:31.371 passed 00:10:31.371 Test: blockdev nvme admin passthru ...passed 00:10:31.371 Test: blockdev copy ...passed 00:10:31.371 Suite: bdevio tests on: Nvme2n1 00:10:31.371 Test: blockdev write read block ...passed 00:10:31.371 Test: blockdev write zeroes read block ...passed 00:10:31.371 Test: blockdev write zeroes read no split ...passed 00:10:31.371 Test: blockdev write zeroes read split ...passed 00:10:31.371 Test: blockdev write zeroes read split partial ...passed 00:10:31.371 Test: blockdev reset ...[2024-12-05 19:28:50.333529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:10:31.371 [2024-12-05 19:28:50.336471] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:10:31.371 passed 00:10:31.371 Test: blockdev write read 8 blocks ...passed 00:10:31.371 Test: blockdev write read size > 128k ...passed 00:10:31.371 Test: blockdev write read invalid size ...passed 00:10:31.371 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:31.371 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:31.371 Test: blockdev write read max offset ...passed 00:10:31.371 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:31.371 Test: blockdev writev readv 8 blocks ...passed 00:10:31.371 Test: blockdev writev readv 30 x 1block ...passed 00:10:31.371 Test: blockdev writev readv block ...passed 00:10:31.371 Test: blockdev writev readv size > 128k ...passed 00:10:31.371 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:31.371 Test: blockdev comparev and writev ...[2024-12-05 19:28:50.343980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1634000 len:0x1000 00:10:31.371 [2024-12-05 19:28:50.344107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:31.371 passed 00:10:31.371 Test: blockdev nvme passthru rw ...passed 00:10:31.371 Test: blockdev nvme passthru vendor specific ...[2024-12-05 19:28:50.345010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:10:31.371 [2024-12-05 19:28:50.345087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:10:31.371 passed 00:10:31.371 Test: blockdev nvme admin passthru ...passed 00:10:31.371 Test: blockdev copy ...passed 00:10:31.371 Suite: bdevio tests on: Nvme1n1p2 00:10:31.371 Test: blockdev write read block ...passed 00:10:31.371 Test: blockdev write zeroes read block ...passed 00:10:31.371 Test: blockdev write zeroes read no split ...passed 00:10:31.629 Test: blockdev write zeroes read split ...passed 00:10:31.629 Test: blockdev write zeroes read split partial ...passed 00:10:31.629 Test: blockdev reset ...[2024-12-05 19:28:50.395390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:31.629 [2024-12-05 19:28:50.398065] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:10:31.629 passed 00:10:31.629 Test: blockdev write read 8 blocks ...passed 00:10:31.629 Test: blockdev write read size > 128k ...passed 00:10:31.629 Test: blockdev write read invalid size ...passed 00:10:31.629 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:31.629 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:31.629 Test: blockdev write read max offset ...passed 00:10:31.629 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:31.629 Test: blockdev writev readv 8 blocks ...passed 00:10:31.629 Test: blockdev writev readv 30 x 1block ...passed 00:10:31.629 Test: blockdev writev readv block ...passed 00:10:31.629 Test: blockdev writev readv size > 128k ...passed 00:10:31.629 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:31.629 Test: blockdev comparev and writev ...[2024-12-05 19:28:50.403924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c1630000 len:0x1000 00:10:31.629 [2024-12-05 19:28:50.403970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:31.629 passed 00:10:31.629 Test: blockdev nvme passthru rw ...passed 00:10:31.629 Test: blockdev nvme passthru vendor specific ...passed 00:10:31.629 Test: blockdev nvme admin passthru ...passed 00:10:31.629 Test: blockdev copy ...passed 00:10:31.629 Suite: bdevio tests on: Nvme1n1p1 00:10:31.629 Test: blockdev write read block ...passed 00:10:31.629 Test: blockdev write zeroes read block ...passed 00:10:31.629 Test: blockdev write zeroes read no split ...passed 00:10:31.629 Test: blockdev write zeroes read split ...passed 00:10:31.629 Test: blockdev write zeroes read split partial ...passed 00:10:31.629 Test: blockdev reset ...[2024-12-05 19:28:50.442369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:10:31.629 [2024-12-05 19:28:50.444818] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:10:31.629 passed 00:10:31.629 Test: blockdev write read 8 blocks ...passed 00:10:31.629 Test: blockdev write read size > 128k ...passed 00:10:31.629 Test: blockdev write read invalid size ...passed 00:10:31.629 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:31.629 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:31.629 Test: blockdev write read max offset ...passed 00:10:31.629 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:31.629 Test: blockdev writev readv 8 blocks ...passed 00:10:31.629 Test: blockdev writev readv 30 x 1block ...passed 00:10:31.629 Test: blockdev writev readv block ...passed 00:10:31.629 Test: blockdev writev readv size > 128k ...passed 00:10:31.629 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:31.629 Test: blockdev comparev and writev ...[2024-12-05 19:28:50.450604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2bbc0e000 len:0x1000 00:10:31.629 [2024-12-05 19:28:50.450648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:10:31.629 passed 00:10:31.629 Test: blockdev nvme passthru rw ...passed 00:10:31.629 Test: blockdev nvme passthru vendor specific ...passed 00:10:31.629 Test: blockdev nvme admin passthru ...passed 00:10:31.629 Test: blockdev copy ...passed 00:10:31.629 Suite: bdevio tests on: Nvme0n1 00:10:31.629 Test: blockdev write read block ...passed 00:10:31.629 Test: blockdev write zeroes read block ...passed 00:10:31.629 Test: blockdev write zeroes read no split ...passed 00:10:31.629 Test: blockdev write zeroes read split ...passed 00:10:31.629 Test: blockdev write zeroes read split partial ...passed 00:10:31.629 Test: blockdev reset ...[2024-12-05 19:28:50.489202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:31.629 [2024-12-05 19:28:50.491875] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:31.629 passed 00:10:31.629 Test: blockdev write read 8 blocks ...passed 00:10:31.629 Test: blockdev write read size > 128k ...passed 00:10:31.629 Test: blockdev write read invalid size ...passed 00:10:31.629 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:31.629 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:31.629 Test: blockdev write read max offset ...passed 00:10:31.629 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:31.629 Test: blockdev writev readv 8 blocks ...passed 00:10:31.629 Test: blockdev writev readv 30 x 1block ...passed 00:10:31.629 Test: blockdev writev readv block ...passed 00:10:31.629 Test: blockdev writev readv size > 128k ...passed 00:10:31.629 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:31.629 Test: blockdev comparev and writev ...passed 00:10:31.629 Test: blockdev nvme passthru rw ...[2024-12-05 19:28:50.496709] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:10:31.629 separate metadata which is not supported yet. 
00:10:31.629 passed 00:10:31.629 Test: blockdev nvme passthru vendor specific ...[2024-12-05 19:28:50.497082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:10:31.629 passed 00:10:31.629 Test: blockdev nvme admin passthru ...[2024-12-05 19:28:50.497122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:10:31.629 passed 00:10:31.629 Test: blockdev copy ...passed 00:10:31.629 00:10:31.629 Run Summary: Type Total Ran Passed Failed Inactive 00:10:31.629 suites 7 7 n/a 0 0 00:10:31.629 tests 161 161 161 0 0 00:10:31.629 asserts 1025 1025 1025 0 n/a 00:10:31.629 00:10:31.629 Elapsed time = 1.022 seconds 00:10:31.629 0 00:10:31.629 19:28:50 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61638 00:10:31.629 19:28:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61638 ']' 00:10:31.629 19:28:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61638 00:10:31.629 19:28:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:10:31.629 19:28:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:31.629 19:28:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61638 00:10:31.630 19:28:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:31.630 19:28:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:31.630 killing process with pid 61638 00:10:31.630 19:28:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61638' 00:10:31.630 19:28:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61638 00:10:31.630 19:28:50 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61638 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:10:32.569 00:10:32.569 real 0m2.099s 00:10:32.569 user 0m5.327s 00:10:32.569 sys 0m0.292s 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:10:32.569 ************************************ 00:10:32.569 END TEST bdev_bounds 00:10:32.569 ************************************ 00:10:32.569 19:28:51 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:32.569 19:28:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:32.569 19:28:51 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.569 19:28:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:32.569 ************************************ 00:10:32.569 START TEST bdev_nbd 00:10:32.569 ************************************ 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61698 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61698 /var/tmp/spdk-nbd.sock 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61698 ']' 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:10:32.569 19:28:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:32.569 [2024-12-05 19:28:51.321168] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
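Condensed, the NBD flow the traces below exercise is: a bdev_svc app listens on /var/tmp/spdk-nbd.sock, each bdev is exported as an NBD device over RPC, the kernel-visible device is sanity-read, and the export is torn down. A minimal sketch using only commands that appear in this log (socket path, rpc.py location, and scratch file exactly as traced; not a substitute for the nbd_common.sh helpers):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
$rpc -s $sock nbd_start_disk Nvme0n1 /dev/nbd0   # export bdev Nvme0n1 as /dev/nbd0
grep -q -w nbd0 /proc/partitions                 # kernel now lists the device
dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
$rpc -s $sock nbd_get_disks                      # JSON map of nbd_device -> bdev_name
$rpc -s $sock nbd_stop_disk /dev/nbd0            # detach when done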
00:10:32.569 [2024-12-05 19:28:51.321288] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.569 [2024-12-05 19:28:51.475521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.827 [2024-12-05 19:28:51.576178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:33.393 19:28:52 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:33.653 1+0 records in 00:10:33.653 1+0 records out 00:10:33.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446934 s, 9.2 MB/s 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:33.653 1+0 records in 00:10:33.653 1+0 records out 00:10:33.653 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320908 s, 12.8 MB/s 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:33.653 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:10:33.912 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:10:33.912 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:10:33.912 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:10:33.912 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:10:33.912 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:33.912 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:33.912 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:33.912 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:10:33.912 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:33.912 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:33.912 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:33.912 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:33.912 1+0 records in 00:10:33.912 1+0 records out 00:10:33.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373963 s, 11.0 MB/s 00:10:33.912 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:33.912 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:33.912 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:33.912 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:33.912 19:28:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:33.912 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:33.912 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:33.912 19:28:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:10:34.174 19:28:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:10:34.174 19:28:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:10:34.174 19:28:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:10:34.174 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:10:34.174 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:34.174 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:34.174 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:34.174 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:10:34.174 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:34.174 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:34.174 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:34.174 19:28:53 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:34.174 1+0 records in 00:10:34.174 1+0 records out 00:10:34.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474487 s, 8.6 MB/s 00:10:34.174 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.174 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:34.174 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.174 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:34.174 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:34.174 19:28:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:34.174 19:28:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:34.174 19:28:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:10:34.453 19:28:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:10:34.453 19:28:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:10:34.453 19:28:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:10:34.453 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:10:34.453 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:34.453 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:34.453 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:34.453 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:10:34.453 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:34.453 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:34.453 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:34.453 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:34.453 1+0 records in 00:10:34.453 1+0 records out 00:10:34.453 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420091 s, 9.8 MB/s 00:10:34.453 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.453 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:34.453 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.453 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:34.453 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:34.453 19:28:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:34.453 19:28:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:34.453 19:28:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:10:34.765 19:28:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:10:34.765 19:28:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:10:34.765 19:28:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:10:34.765 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:10:34.765 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:34.765 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:34.765 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:34.765 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:10:34.765 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:34.765 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:34.765 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:34.765 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:34.765 1+0 records in 00:10:34.765 1+0 records out 00:10:34.765 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325477 s, 12.6 MB/s 00:10:34.765 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.765 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:34.765 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:34.765 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:34.765 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:34.765 19:28:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:34.765 19:28:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:34.765 19:28:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:10:35.026 19:28:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:10:35.026 19:28:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:10:35.026 19:28:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:10:35.026 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:10:35.026 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:35.026 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:35.026 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:35.026 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:10:35.026 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:35.026 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:35.026 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:35.026 19:28:53 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:35.026 1+0 records in 00:10:35.026 1+0 records out 00:10:35.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379204 s, 10.8 MB/s 00:10:35.026 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.026 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:35.026 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:35.026 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:35.026 19:28:53 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:35.026 19:28:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:10:35.026 19:28:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:10:35.026 19:28:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:35.026 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:10:35.026 { 00:10:35.026 "nbd_device": "/dev/nbd0", 00:10:35.026 "bdev_name": "Nvme0n1" 00:10:35.026 }, 00:10:35.026 { 00:10:35.026 "nbd_device": "/dev/nbd1", 00:10:35.026 "bdev_name": "Nvme1n1p1" 00:10:35.026 }, 00:10:35.026 { 00:10:35.026 "nbd_device": "/dev/nbd2", 00:10:35.026 "bdev_name": "Nvme1n1p2" 00:10:35.026 }, 00:10:35.026 { 00:10:35.026 "nbd_device": "/dev/nbd3", 00:10:35.026 "bdev_name": "Nvme2n1" 00:10:35.026 }, 00:10:35.026 { 00:10:35.026 "nbd_device": "/dev/nbd4", 00:10:35.026 "bdev_name": "Nvme2n2" 00:10:35.026 }, 00:10:35.027 { 00:10:35.027 "nbd_device": "/dev/nbd5", 00:10:35.027 "bdev_name": "Nvme2n3" 00:10:35.027 }, 00:10:35.027 { 00:10:35.027 "nbd_device": "/dev/nbd6", 00:10:35.027 "bdev_name": "Nvme3n1" 00:10:35.027 } 00:10:35.027 ]' 00:10:35.027 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:10:35.027 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:10:35.027 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:10:35.027 { 00:10:35.027 "nbd_device": "/dev/nbd0", 00:10:35.027 "bdev_name": "Nvme0n1" 00:10:35.027 }, 00:10:35.027 { 00:10:35.027 "nbd_device": "/dev/nbd1", 00:10:35.027 "bdev_name": "Nvme1n1p1" 00:10:35.027 }, 00:10:35.027 { 00:10:35.027 "nbd_device": "/dev/nbd2", 00:10:35.027 "bdev_name": "Nvme1n1p2" 00:10:35.027 }, 00:10:35.027 { 00:10:35.027 "nbd_device": "/dev/nbd3", 00:10:35.027 "bdev_name": "Nvme2n1" 00:10:35.027 }, 00:10:35.027 { 00:10:35.027 "nbd_device": "/dev/nbd4", 00:10:35.027 "bdev_name": "Nvme2n2" 00:10:35.027 }, 00:10:35.027 { 00:10:35.027 "nbd_device": "/dev/nbd5", 00:10:35.027 "bdev_name": "Nvme2n3" 00:10:35.027 }, 00:10:35.027 { 00:10:35.027 "nbd_device": "/dev/nbd6", 00:10:35.027 "bdev_name": "Nvme3n1" 00:10:35.027 } 00:10:35.027 ]' 00:10:35.288 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:10:35.288 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:35.288 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:10:35.288 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:35.288 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:35.289 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:35.289 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:35.289 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:35.289 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:35.289 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:35.289 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:35.289 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:35.289 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:35.289 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:35.289 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:35.289 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:35.289 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:35.549 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:35.549 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:35.549 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:35.549 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:35.549 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:35.549 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:35.549 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:35.549 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:35.549 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:35.549 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:10:35.807 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:10:35.807 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:10:35.807 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:10:35.807 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:35.807 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:35.807 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:10:35.807 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:35.807 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:35.807 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:35.807 19:28:54 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:10:36.065 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:10:36.065 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:10:36.065 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:10:36.065 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:36.065 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:36.065 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:10:36.065 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:36.065 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:36.065 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:36.065 19:28:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:10:36.321 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:10:36.321 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:10:36.321 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:10:36.321 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:36.321 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:36.321 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:10:36.321 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:36.321 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:36.321 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:36.321 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:10:36.321 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:10:36.321 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:10:36.321 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:10:36.321 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:36.321 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:36.321 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:10:36.321 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:36.321 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:36.321 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:36.321 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:10:36.884 
19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:36.884 19:28:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:10:37.142 /dev/nbd0 00:10:37.142 19:28:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:10:37.142 19:28:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:10:37.142 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:10:37.142 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:37.142 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:37.142 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:37.142 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:10:37.142 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:37.142 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:37.142 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:37.142 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:37.142 1+0 records in 00:10:37.142 1+0 records out 00:10:37.142 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000467154 s, 8.8 MB/s 00:10:37.142 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.142 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:37.142 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.142 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:37.142 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:37.142 19:28:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:37.142 19:28:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:37.142 19:28:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:10:37.403 /dev/nbd1 00:10:37.403 19:28:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:10:37.403 19:28:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:10:37.403 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:10:37.403 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:37.403 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:37.403 19:28:56 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:37.403 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:10:37.403 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:37.403 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:37.403 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:37.403 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:37.403 1+0 records in 00:10:37.403 1+0 records out 00:10:37.403 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000516283 s, 7.9 MB/s 00:10:37.403 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.403 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:37.403 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.403 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:37.403 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:37.403 19:28:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:37.403 19:28:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:37.403 19:28:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:10:37.662 /dev/nbd10 00:10:37.662 19:28:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:10:37.662 19:28:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:10:37.662 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:10:37.662 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:37.662 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:37.662 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:37.662 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:10:37.662 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:37.662 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:37.662 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:37.662 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:37.662 1+0 records in 00:10:37.662 1+0 records out 00:10:37.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520885 s, 7.9 MB/s 00:10:37.662 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.662 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:37.662 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.662 19:28:56 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:37.662 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:37.662 19:28:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:37.662 19:28:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:37.662 19:28:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:10:37.921 /dev/nbd11 00:10:37.921 19:28:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:10:37.921 19:28:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:10:37.921 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:10:37.921 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:37.921 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:37.921 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:37.921 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:10:37.921 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:37.921 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:37.921 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:37.921 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:37.921 1+0 records in 00:10:37.921 1+0 records out 00:10:37.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421353 s, 9.7 MB/s 00:10:37.921 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.921 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:37.921 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:37.921 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:37.921 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:37.921 19:28:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:37.921 19:28:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:37.921 19:28:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:10:38.179 /dev/nbd12 00:10:38.179 19:28:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:10:38.179 19:28:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:10:38.179 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:10:38.179 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:38.179 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:38.179 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:38.179 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
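The waitfornbd helper traced throughout this run polls /proc/partitions until the device node appears, then proves it is readable with a single direct-I/O read whose size must be non-zero. A minimal reconstruction from the trace (the delay between retries is an assumption; it is not visible in this log):

waitfornbd() {
    local nbd_name=$1
    local i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1  # assumed retry delay; not shown in the traced output
    done
    # single 4 KiB direct read, as in the traced dd invocations above
    dd if=/dev/"$nbd_name" of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
    local size
    size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest)
    rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
    [ "$size" != 0 ] && return 0
}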
00:10:38.179 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:38.179 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:38.179 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:38.179 19:28:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:38.179 1+0 records in 00:10:38.179 1+0 records out 00:10:38.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337109 s, 12.2 MB/s 00:10:38.179 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.179 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:38.179 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.179 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:38.179 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:38.179 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:38.179 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:38.179 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:10:38.438 /dev/nbd13 00:10:38.438 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:10:38.438 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:10:38.438 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:10:38.438 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:38.438 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:38.438 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:38.438 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:10:38.438 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:38.438 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:38.438 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:38.438 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:38.439 1+0 records in 00:10:38.439 1+0 records out 00:10:38.439 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406506 s, 10.1 MB/s 00:10:38.439 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.439 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:38.439 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.439 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:38.439 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:38.439 19:28:57 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:38.439 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:38.439 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:10:38.439 /dev/nbd14 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:10:38.698 1+0 records in 00:10:38.698 1+0 records out 00:10:38.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000428621 s, 9.6 MB/s 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:10:38.698 { 00:10:38.698 "nbd_device": "/dev/nbd0", 00:10:38.698 "bdev_name": "Nvme0n1" 00:10:38.698 }, 00:10:38.698 { 00:10:38.698 "nbd_device": "/dev/nbd1", 00:10:38.698 "bdev_name": "Nvme1n1p1" 00:10:38.698 }, 00:10:38.698 { 00:10:38.698 "nbd_device": "/dev/nbd10", 00:10:38.698 "bdev_name": "Nvme1n1p2" 00:10:38.698 }, 00:10:38.698 { 00:10:38.698 "nbd_device": "/dev/nbd11", 00:10:38.698 "bdev_name": "Nvme2n1" 00:10:38.698 }, 00:10:38.698 { 00:10:38.698 "nbd_device": "/dev/nbd12", 00:10:38.698 "bdev_name": "Nvme2n2" 00:10:38.698 }, 00:10:38.698 { 00:10:38.698 "nbd_device": "/dev/nbd13", 00:10:38.698 "bdev_name": "Nvme2n3" 
00:10:38.698 }, 00:10:38.698 { 00:10:38.698 "nbd_device": "/dev/nbd14", 00:10:38.698 "bdev_name": "Nvme3n1" 00:10:38.698 } 00:10:38.698 ]' 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:10:38.698 { 00:10:38.698 "nbd_device": "/dev/nbd0", 00:10:38.698 "bdev_name": "Nvme0n1" 00:10:38.698 }, 00:10:38.698 { 00:10:38.698 "nbd_device": "/dev/nbd1", 00:10:38.698 "bdev_name": "Nvme1n1p1" 00:10:38.698 }, 00:10:38.698 { 00:10:38.698 "nbd_device": "/dev/nbd10", 00:10:38.698 "bdev_name": "Nvme1n1p2" 00:10:38.698 }, 00:10:38.698 { 00:10:38.698 "nbd_device": "/dev/nbd11", 00:10:38.698 "bdev_name": "Nvme2n1" 00:10:38.698 }, 00:10:38.698 { 00:10:38.698 "nbd_device": "/dev/nbd12", 00:10:38.698 "bdev_name": "Nvme2n2" 00:10:38.698 }, 00:10:38.698 { 00:10:38.698 "nbd_device": "/dev/nbd13", 00:10:38.698 "bdev_name": "Nvme2n3" 00:10:38.698 }, 00:10:38.698 { 00:10:38.698 "nbd_device": "/dev/nbd14", 00:10:38.698 "bdev_name": "Nvme3n1" 00:10:38.698 } 00:10:38.698 ]' 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:10:38.698 /dev/nbd1 00:10:38.698 /dev/nbd10 00:10:38.698 /dev/nbd11 00:10:38.698 /dev/nbd12 00:10:38.698 /dev/nbd13 00:10:38.698 /dev/nbd14' 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:10:38.698 /dev/nbd1 00:10:38.698 /dev/nbd10 00:10:38.698 /dev/nbd11 00:10:38.698 /dev/nbd12 00:10:38.698 /dev/nbd13 00:10:38.698 /dev/nbd14' 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:10:38.698 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:38.699 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:38.699 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:10:38.699 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:38.699 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:10:38.699 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:10:38.959 256+0 records in 00:10:38.959 256+0 records out 00:10:38.959 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00770897 s, 136 MB/s 00:10:38.959 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:38.959 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:10:38.959 256+0 records in 00:10:38.959 256+0 records out 00:10:38.959 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0741003 s, 14.2 MB/s 00:10:38.959 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:38.959 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:10:38.959 256+0 records in 00:10:38.959 256+0 records out 00:10:38.959 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.075482 s, 13.9 MB/s 00:10:38.959 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:38.959 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:10:38.959 256+0 records in 00:10:38.959 256+0 records out 00:10:38.959 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0752755 s, 13.9 MB/s 00:10:38.959 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:38.959 19:28:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:10:39.219 256+0 records in 00:10:39.219 256+0 records out 00:10:39.219 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0807405 s, 13.0 MB/s 00:10:39.219 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:39.219 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:10:39.219 256+0 records in 00:10:39.219 256+0 records out 00:10:39.219 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0752017 s, 13.9 MB/s 00:10:39.219 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:39.219 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:10:39.219 256+0 records in 00:10:39.219 256+0 records out 00:10:39.219 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0720626 s, 14.6 MB/s 00:10:39.219 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:10:39.219 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:10:39.480 256+0 records in 00:10:39.480 256+0 records out 00:10:39.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0737771 s, 14.2 MB/s 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:39.480 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:39.740 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:39.740 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:39.740 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:39.740 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:39.740 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:39.740 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:39.740 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:39.740 19:28:58 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:10:39.740 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:39.740 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:10:39.740 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:10:40.001 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:10:40.001 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:10:40.001 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.001 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.001 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:10:40.001 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:40.001 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.001 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.001 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:10:40.001 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:10:40.001 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:10:40.001 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:10:40.001 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.001 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.001 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:10:40.001 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:40.001 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.001 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.001 19:28:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:10:40.262 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:10:40.262 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:10:40.262 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:10:40.262 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.262 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.262 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:10:40.262 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:40.262 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.262 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.262 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:10:40.522 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:10:40.522 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:10:40.522 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:10:40.522 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.522 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.522 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:10:40.522 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:40.522 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.522 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.522 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:10:40.782 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:10:40.782 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:10:40.782 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:10:40.782 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:40.782 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:40.782 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:10:40.782 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:40.782 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:40.782 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:10:40.782 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:10:41.043 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:10:41.043 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:10:41.043 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:10:41.043 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:41.043 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:41.043 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:10:41.043 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:41.043 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:41.043 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:10:41.043 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:41.043 19:28:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:10:41.339 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:10:41.339 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:10:41.339 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:10:41.339 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:10:41.339 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:10:41.339 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:10:41.339 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:10:41.339 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:10:41.339 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:10:41.339 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:10:41.339 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:10:41.339 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:10:41.339 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:41.339 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:41.339 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:10:41.339 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:10:41.339 malloc_lvol_verify 00:10:41.339 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:10:41.599 e470fe5e-6b43-487a-bcea-d60c839eb394 00:10:41.599 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:10:41.859 dc6efb43-7e87-4fa2-86ae-f700ca692ea5 00:10:41.859 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:10:42.120 /dev/nbd0 00:10:42.120 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:10:42.120 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:10:42.120 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:10:42.120 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:10:42.120 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:10:42.120 mke2fs 1.47.0 (5-Feb-2023) 00:10:42.120 Discarding device blocks: 0/4096 done 00:10:42.120 Creating filesystem with 4096 1k blocks and 1024 inodes 00:10:42.120 00:10:42.120 Allocating group tables: 0/1 done 00:10:42.120 Writing inode tables: 0/1 done 00:10:42.120 Creating journal (1024 blocks): done 00:10:42.120 Writing superblocks and filesystem accounting information: 0/1 done 00:10:42.120 00:10:42.120 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:10:42.120 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:10:42.120 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:10:42.120 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:10:42.120 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:10:42.120 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:10:42.120 19:29:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:10:42.382 19:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:10:42.382 19:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:10:42.382 19:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:10:42.382 19:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:10:42.382 19:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:10:42.382 19:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:10:42.382 19:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:10:42.382 19:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:10:42.382 19:29:01 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61698 00:10:42.382 19:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61698 ']' 00:10:42.382 19:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61698 00:10:42.382 19:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:10:42.382 19:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:42.382 19:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61698 00:10:42.382 killing process with pid 61698 00:10:42.382 19:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:42.382 19:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:42.382 19:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61698' 00:10:42.382 19:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61698 00:10:42.382 19:29:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61698 00:10:44.919 19:29:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:10:44.919 00:10:44.919 real 0m12.109s 00:10:44.919 user 0m16.569s 00:10:44.919 sys 0m3.715s 00:10:44.919 19:29:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.919 19:29:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:10:44.919 ************************************ 00:10:44.919 END TEST bdev_nbd 00:10:44.919 ************************************ 00:10:44.919 skipping fio tests on NVMe due to multi-ns failures. 00:10:44.919 19:29:03 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:10:44.919 19:29:03 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:10:44.919 19:29:03 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:10:44.919 19:29:03 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:10:44.919 19:29:03 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:10:44.920 19:29:03 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:44.920 19:29:03 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:10:44.920 19:29:03 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.920 19:29:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:44.920 ************************************ 00:10:44.920 START TEST bdev_verify 00:10:44.920 ************************************ 00:10:44.920 19:29:03 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:10:44.920 [2024-12-05 19:29:03.465103] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:10:44.920 [2024-12-05 19:29:03.465243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62107 ] 00:10:44.920 [2024-12-05 19:29:03.629029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:44.920 [2024-12-05 19:29:03.730719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.920 [2024-12-05 19:29:03.730939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.485 Running I/O for 5 seconds... 
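# Note: the bdev_verify stage runs bdevperf with the flags shown in the
# run_test line above: -q 128 (queue depth), -o 4096 (I/O size in bytes),
# -w verify (read-back data-verification workload), -t 5 (run time in
# seconds) and -m 0x3 (two-core mask, hence the two reactors). The
# per-second IOPS samples and the per-job latency table that follow are
# bdevperf's normal output.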
00:10:47.794 22720.00 IOPS, 88.75 MiB/s [2024-12-05T19:29:07.731Z] 23744.00 IOPS, 92.75 MiB/s [2024-12-05T19:29:08.747Z] 23637.33 IOPS, 92.33 MiB/s [2024-12-05T19:29:09.699Z] 23808.00 IOPS, 93.00 MiB/s [2024-12-05T19:29:09.699Z] 23731.20 IOPS, 92.70 MiB/s 00:10:50.693 Latency(us) 00:10:50.693 [2024-12-05T19:29:09.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:50.693 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:50.693 Verification LBA range: start 0x0 length 0xbd0bd 00:10:50.693 Nvme0n1 : 5.06 1681.42 6.57 0.00 0.00 75736.58 9729.58 78239.90 00:10:50.693 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:50.693 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:10:50.693 Nvme0n1 : 5.05 1647.25 6.43 0.00 0.00 77376.79 16434.41 85095.98 00:10:50.693 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:50.693 Verification LBA range: start 0x0 length 0x4ff80 00:10:50.693 Nvme1n1p1 : 5.06 1680.92 6.57 0.00 0.00 75651.63 9729.58 69770.63 00:10:50.693 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:50.693 Verification LBA range: start 0x4ff80 length 0x4ff80 00:10:50.693 Nvme1n1p1 : 5.05 1646.72 6.43 0.00 0.00 77171.10 15930.29 71787.13 00:10:50.693 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:50.693 Verification LBA range: start 0x0 length 0x4ff7f 00:10:50.693 Nvme1n1p2 : 5.08 1688.94 6.60 0.00 0.00 75413.86 10788.23 66947.54 00:10:50.693 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:50.693 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:10:50.693 Nvme1n1p2 : 5.07 1652.55 6.46 0.00 0.00 76763.58 5595.77 69367.34 00:10:50.693 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:50.693 Verification LBA range: start 0x0 length 0x80000 00:10:50.693 Nvme2n1 : 5.08 1688.48 6.60 0.00 0.00 75286.90 11040.30 62107.96 00:10:50.693 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:50.693 Verification LBA range: start 0x80000 length 0x80000 00:10:50.693 Nvme2n1 : 5.08 1661.36 6.49 0.00 0.00 76318.67 9124.63 68157.44 00:10:50.693 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:50.693 Verification LBA range: start 0x0 length 0x80000 00:10:50.693 Nvme2n2 : 5.08 1688.04 6.59 0.00 0.00 75155.13 10939.47 64931.05 00:10:50.693 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:50.693 Verification LBA range: start 0x80000 length 0x80000 00:10:50.693 Nvme2n2 : 5.09 1660.93 6.49 0.00 0.00 76166.41 9225.45 73400.32 00:10:50.693 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:50.693 Verification LBA range: start 0x0 length 0x80000 00:10:50.693 Nvme2n3 : 5.08 1687.54 6.59 0.00 0.00 75001.78 10838.65 67350.84 00:10:50.693 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:50.693 Verification LBA range: start 0x80000 length 0x80000 00:10:50.693 Nvme2n3 : 5.09 1660.48 6.49 0.00 0.00 76028.72 9427.10 75013.51 00:10:50.693 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:10:50.693 Verification LBA range: start 0x0 length 0x20000 00:10:50.693 Nvme3n1 : 5.08 1687.00 6.59 0.00 0.00 74850.97 9830.40 71383.83 00:10:50.693 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:10:50.693 Verification LBA range: start 0x20000 length 0x20000 00:10:50.693 Nvme3n1 : 
5.09 1660.02 6.48 0.00 0.00 75950.58 9527.93 76223.41 00:10:50.693 [2024-12-05T19:29:09.699Z] =================================================================================================================== 00:10:50.693 [2024-12-05T19:29:09.699Z] Total : 23391.64 91.37 0.00 0.00 75911.67 5595.77 85095.98 00:10:52.589 00:10:52.589 real 0m7.729s 00:10:52.589 user 0m14.545s 00:10:52.589 sys 0m0.225s 00:10:52.589 ************************************ 00:10:52.589 END TEST bdev_verify 00:10:52.589 ************************************ 00:10:52.589 19:29:11 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.589 19:29:11 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:10:52.589 19:29:11 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:52.589 19:29:11 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:10:52.589 19:29:11 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.589 19:29:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:10:52.589 ************************************ 00:10:52.589 START TEST bdev_verify_big_io 00:10:52.589 ************************************ 00:10:52.589 19:29:11 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:10:52.589 [2024-12-05 19:29:11.233279] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:10:52.589 [2024-12-05 19:29:11.233401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62205 ] 00:10:52.589 [2024-12-05 19:29:11.393642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:52.589 [2024-12-05 19:29:11.496239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.589 [2024-12-05 19:29:11.496464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.522 Running I/O for 5 seconds... 
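# Note: bdev_verify_big_io is the same verify workload rerun with -o 65536,
# i.e. 64 KiB I/Os instead of 4 KiB, so the samples and table below show far
# fewer IOPS at a similar byte throughput, with correspondingly larger
# per-I/O latencies.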
00:10:58.774 1234.00 IOPS, 77.12 MiB/s [2024-12-05T19:29:18.716Z] 2728.00 IOPS, 170.50 MiB/s [2024-12-05T19:29:18.716Z] 3088.00 IOPS, 193.00 MiB/s 00:10:59.710 Latency(us) 00:10:59.710 [2024-12-05T19:29:18.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:59.710 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:59.710 Verification LBA range: start 0x0 length 0xbd0b 00:10:59.710 Nvme0n1 : 5.93 97.10 6.07 0.00 0.00 1222813.28 18753.38 1438968.91 00:10:59.710 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:59.710 Verification LBA range: start 0xbd0b length 0xbd0b 00:10:59.710 Nvme0n1 : 5.83 87.87 5.49 0.00 0.00 1377368.81 23189.66 1703532.70 00:10:59.710 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:59.710 Verification LBA range: start 0x0 length 0x4ff8 00:10:59.710 Nvme1n1p1 : 6.06 100.64 6.29 0.00 0.00 1155402.49 105664.20 1219574.55 00:10:59.710 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:59.710 Verification LBA range: start 0x4ff8 length 0x4ff8 00:10:59.710 Nvme1n1p1 : 5.94 101.58 6.35 0.00 0.00 1154057.61 96791.63 1406705.03 00:10:59.710 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:59.710 Verification LBA range: start 0x0 length 0x4ff7 00:10:59.711 Nvme1n1p2 : 6.14 103.67 6.48 0.00 0.00 1099420.10 126635.72 1077613.49 00:10:59.711 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:59.711 Verification LBA range: start 0x4ff7 length 0x4ff7 00:10:59.711 Nvme1n1p2 : 5.94 107.77 6.74 0.00 0.00 1071336.53 107277.39 1193763.45 00:10:59.711 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:59.711 Verification LBA range: start 0x0 length 0x8000 00:10:59.711 Nvme2n1 : 6.06 96.14 6.01 0.00 0.00 1146143.79 126635.72 2193943.63 00:10:59.711 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:59.711 Verification LBA range: start 0x8000 length 0x8000 00:10:59.711 Nvme2n1 : 6.07 110.92 6.93 0.00 0.00 997911.71 88322.36 1219574.55 00:10:59.711 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:59.711 Verification LBA range: start 0x0 length 0x8000 00:10:59.711 Nvme2n2 : 6.19 105.82 6.61 0.00 0.00 1019891.32 40733.14 2232660.28 00:10:59.711 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:59.711 Verification LBA range: start 0x8000 length 0x8000 00:10:59.711 Nvme2n2 : 6.07 115.50 7.22 0.00 0.00 933127.21 38515.00 1238932.87 00:10:59.711 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:59.711 Verification LBA range: start 0x0 length 0x8000 00:10:59.711 Nvme2n3 : 6.28 109.80 6.86 0.00 0.00 941993.55 43959.53 2271376.94 00:10:59.711 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:59.711 Verification LBA range: start 0x8000 length 0x8000 00:10:59.711 Nvme2n3 : 6.14 124.99 7.81 0.00 0.00 833395.66 39523.25 1271196.75 00:10:59.711 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:10:59.711 Verification LBA range: start 0x0 length 0x2000 00:10:59.711 Nvme3n1 : 6.29 124.53 7.78 0.00 0.00 808020.89 4461.49 2310093.59 00:10:59.711 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:10:59.711 Verification LBA range: start 0x2000 length 0x2000 00:10:59.711 Nvme3n1 : 6.27 153.05 9.57 0.00 0.00 660438.14 614.40 1303460.63 00:10:59.711 
[2024-12-05T19:29:18.717Z] =================================================================================================================== 00:10:59.711 [2024-12-05T19:29:18.717Z] Total : 1539.37 96.21 0.00 0.00 1002551.79 614.40 2310093.59 00:11:01.102 00:11:01.102 real 0m8.775s 00:11:01.102 user 0m16.602s 00:11:01.102 sys 0m0.249s 00:11:01.102 19:29:19 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.102 ************************************ 00:11:01.102 END TEST bdev_verify_big_io 00:11:01.102 ************************************ 00:11:01.102 19:29:19 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:11:01.102 19:29:19 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:01.102 19:29:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:01.102 19:29:19 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.102 19:29:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:01.102 ************************************ 00:11:01.102 START TEST bdev_write_zeroes 00:11:01.102 ************************************ 00:11:01.102 19:29:20 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:01.102 [2024-12-05 19:29:20.073939] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:11:01.102 [2024-12-05 19:29:20.074064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62320 ] 00:11:01.360 [2024-12-05 19:29:20.234330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.360 [2024-12-05 19:29:20.336623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.926 Running I/O for 1 seconds... 
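# Note: bdev_write_zeroes swaps in -w write_zeroes with -t 1 and no core
# mask (a single core, hence one reactor), exercising each bdev's
# write-zeroes path for one second; there is no read-back step, so only
# IOPS and latency figures are reported below.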
00:11:03.299 64017.00 IOPS, 250.07 MiB/s 00:11:03.299 Latency(us) 00:11:03.299 [2024-12-05T19:29:22.305Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:03.299 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:03.299 Nvme0n1 : 1.02 9072.14 35.44 0.00 0.00 14075.14 6074.68 30852.33 00:11:03.299 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:03.299 Nvme1n1p1 : 1.03 9106.80 35.57 0.00 0.00 14004.35 11645.24 31860.58 00:11:03.299 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:03.299 Nvme1n1p2 : 1.03 9095.72 35.53 0.00 0.00 13980.25 10939.47 30852.33 00:11:03.299 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:03.299 Nvme2n1 : 1.03 9085.48 35.49 0.00 0.00 13967.31 10637.00 29844.09 00:11:03.299 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:03.299 Nvme2n2 : 1.03 9075.23 35.45 0.00 0.00 13921.84 7662.67 28835.84 00:11:03.299 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:03.299 Nvme2n3 : 1.03 9065.00 35.41 0.00 0.00 13914.16 7007.31 28230.89 00:11:03.299 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:11:03.299 Nvme3n1 : 1.03 9054.85 35.37 0.00 0.00 13909.55 6805.66 29239.14 00:11:03.299 [2024-12-05T19:29:22.305Z] =================================================================================================================== 00:11:03.299 [2024-12-05T19:29:22.305Z] Total : 63555.22 248.26 0.00 0.00 13967.44 6074.68 31860.58 00:11:03.865 00:11:03.865 real 0m2.706s 00:11:03.865 user 0m2.394s 00:11:03.865 sys 0m0.191s 00:11:03.865 19:29:22 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.865 ************************************ 00:11:03.865 END TEST bdev_write_zeroes 00:11:03.865 19:29:22 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:11:03.865 ************************************ 00:11:03.865 19:29:22 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:03.865 19:29:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:03.865 19:29:22 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.865 19:29:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:03.865 ************************************ 00:11:03.865 START TEST bdev_json_nonenclosed 00:11:03.865 ************************************ 00:11:03.865 19:29:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:03.865 [2024-12-05 19:29:22.839192] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
00:11:03.865 [2024-12-05 19:29:22.839312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62373 ] 00:11:04.123 [2024-12-05 19:29:22.999465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.123 [2024-12-05 19:29:23.099057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.123 [2024-12-05 19:29:23.099150] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:11:04.123 [2024-12-05 19:29:23.099167] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:04.123 [2024-12-05 19:29:23.099177] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:04.381 00:11:04.381 real 0m0.503s 00:11:04.381 user 0m0.315s 00:11:04.381 sys 0m0.084s 00:11:04.381 19:29:23 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.381 ************************************ 00:11:04.381 END TEST bdev_json_nonenclosed 00:11:04.381 ************************************ 00:11:04.381 19:29:23 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:11:04.381 19:29:23 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:04.381 19:29:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:11:04.381 19:29:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.381 19:29:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:04.381 ************************************ 00:11:04.381 START TEST bdev_json_nonarray 00:11:04.381 ************************************ 00:11:04.381 19:29:23 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:11:04.639 [2024-12-05 19:29:23.388841] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:11:04.639 [2024-12-05 19:29:23.388959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62393 ] 00:11:04.639 [2024-12-05 19:29:23.546657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:04.897 [2024-12-05 19:29:23.647981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.897 [2024-12-05 19:29:23.648070] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
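# Note: bdev_json_nonenclosed (completed above) and bdev_json_nonarray
# (finishing below) are negative tests of bdevperf's --json loader, which
# expects a top-level object whose "subsystems" key is an array. For
# contrast, a config of the valid shape is sketched here; the file name and
# the empty bdev config list are illustrative only.
cat > /tmp/valid_config.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}
EOF
# A file without the enclosing {} produces the "not enclosed in {}" error
# above; a non-array "subsystems" produces the "'subsystems' should be an
# array" error, and in both cases spdk_app_stop exits non-zero, which is
# exactly the outcome each test expects.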
00:11:04.897 [2024-12-05 19:29:23.648088] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:11:04.897 [2024-12-05 19:29:23.648097] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:04.897 00:11:04.897 real 0m0.503s 00:11:04.897 user 0m0.311s 00:11:04.897 sys 0m0.088s 00:11:04.897 ************************************ 00:11:04.897 END TEST bdev_json_nonarray 00:11:04.897 ************************************ 00:11:04.897 19:29:23 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.897 19:29:23 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:11:04.897 19:29:23 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:11:04.897 19:29:23 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:11:04.897 19:29:23 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:11:04.897 19:29:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:04.897 19:29:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.897 19:29:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:04.897 ************************************ 00:11:04.897 START TEST bdev_gpt_uuid 00:11:04.897 ************************************ 00:11:04.897 19:29:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:11:04.897 19:29:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:11:04.897 19:29:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:11:04.897 19:29:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62424 00:11:04.897 19:29:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:04.897 19:29:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62424 00:11:04.897 19:29:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 62424 ']' 00:11:04.897 19:29:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:04.897 19:29:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:11:04.897 19:29:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:04.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:04.897 19:29:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:04.897 19:29:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:04.897 19:29:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:05.155 [2024-12-05 19:29:23.956267] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
00:11:05.155 [2024-12-05 19:29:23.956367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62424 ] 00:11:05.155 [2024-12-05 19:29:24.112216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.412 [2024-12-05 19:29:24.213004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.976 19:29:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:05.977 19:29:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:11:05.977 19:29:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:05.977 19:29:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.977 19:29:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:06.233 Some configs were skipped because the RPC state that can call them passed over. 00:11:06.233 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.233 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:11:06.233 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.233 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:06.233 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.233 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:11:06.233 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.233 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:06.233 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.233 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:11:06.233 { 00:11:06.233 "name": "Nvme1n1p1", 00:11:06.233 "aliases": [ 00:11:06.233 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:11:06.233 ], 00:11:06.233 "product_name": "GPT Disk", 00:11:06.233 "block_size": 4096, 00:11:06.234 "num_blocks": 655104, 00:11:06.234 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:06.234 "assigned_rate_limits": { 00:11:06.234 "rw_ios_per_sec": 0, 00:11:06.234 "rw_mbytes_per_sec": 0, 00:11:06.234 "r_mbytes_per_sec": 0, 00:11:06.234 "w_mbytes_per_sec": 0 00:11:06.234 }, 00:11:06.234 "claimed": false, 00:11:06.234 "zoned": false, 00:11:06.234 "supported_io_types": { 00:11:06.234 "read": true, 00:11:06.234 "write": true, 00:11:06.234 "unmap": true, 00:11:06.234 "flush": true, 00:11:06.234 "reset": true, 00:11:06.234 "nvme_admin": false, 00:11:06.234 "nvme_io": false, 00:11:06.234 "nvme_io_md": false, 00:11:06.234 "write_zeroes": true, 00:11:06.234 "zcopy": false, 00:11:06.234 "get_zone_info": false, 00:11:06.234 "zone_management": false, 00:11:06.234 "zone_append": false, 00:11:06.234 "compare": true, 00:11:06.234 "compare_and_write": false, 00:11:06.234 "abort": true, 00:11:06.234 "seek_hole": false, 00:11:06.234 "seek_data": false, 00:11:06.234 "copy": true, 00:11:06.234 "nvme_iov_md": false 00:11:06.234 }, 00:11:06.234 "driver_specific": { 
00:11:06.234 "gpt": { 00:11:06.234 "base_bdev": "Nvme1n1", 00:11:06.234 "offset_blocks": 256, 00:11:06.234 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:11:06.234 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:11:06.234 "partition_name": "SPDK_TEST_first" 00:11:06.234 } 00:11:06.234 } 00:11:06.234 } 00:11:06.234 ]' 00:11:06.234 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:11:06.234 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:11:06.234 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:11:06.234 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:06.234 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:06.490 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:11:06.490 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:11:06.490 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.491 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:06.491 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.491 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:11:06.491 { 00:11:06.491 "name": "Nvme1n1p2", 00:11:06.491 "aliases": [ 00:11:06.491 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:11:06.491 ], 00:11:06.491 "product_name": "GPT Disk", 00:11:06.491 "block_size": 4096, 00:11:06.491 "num_blocks": 655103, 00:11:06.491 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:06.491 "assigned_rate_limits": { 00:11:06.491 "rw_ios_per_sec": 0, 00:11:06.491 "rw_mbytes_per_sec": 0, 00:11:06.491 "r_mbytes_per_sec": 0, 00:11:06.491 "w_mbytes_per_sec": 0 00:11:06.491 }, 00:11:06.491 "claimed": false, 00:11:06.491 "zoned": false, 00:11:06.491 "supported_io_types": { 00:11:06.491 "read": true, 00:11:06.491 "write": true, 00:11:06.491 "unmap": true, 00:11:06.491 "flush": true, 00:11:06.491 "reset": true, 00:11:06.491 "nvme_admin": false, 00:11:06.491 "nvme_io": false, 00:11:06.491 "nvme_io_md": false, 00:11:06.491 "write_zeroes": true, 00:11:06.491 "zcopy": false, 00:11:06.491 "get_zone_info": false, 00:11:06.491 "zone_management": false, 00:11:06.491 "zone_append": false, 00:11:06.491 "compare": true, 00:11:06.491 "compare_and_write": false, 00:11:06.491 "abort": true, 00:11:06.491 "seek_hole": false, 00:11:06.491 "seek_data": false, 00:11:06.491 "copy": true, 00:11:06.491 "nvme_iov_md": false 00:11:06.491 }, 00:11:06.491 "driver_specific": { 00:11:06.491 "gpt": { 00:11:06.491 "base_bdev": "Nvme1n1", 00:11:06.491 "offset_blocks": 655360, 00:11:06.491 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:11:06.491 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:11:06.491 "partition_name": "SPDK_TEST_second" 00:11:06.491 } 00:11:06.491 } 00:11:06.491 } 00:11:06.491 ]' 00:11:06.491 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:11:06.491 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:11:06.491 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:11:06.491 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:06.491 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:11:06.491 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:11:06.491 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 62424 00:11:06.491 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 62424 ']' 00:11:06.491 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 62424 00:11:06.491 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:11:06.491 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.491 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62424 00:11:06.491 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.491 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.491 killing process with pid 62424 00:11:06.491 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62424' 00:11:06.491 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 62424 00:11:06.491 19:29:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 62424 00:11:08.381 00:11:08.381 real 0m3.002s 00:11:08.381 user 0m3.167s 00:11:08.381 sys 0m0.361s 00:11:08.381 ************************************ 00:11:08.381 END TEST bdev_gpt_uuid 00:11:08.381 ************************************ 00:11:08.381 19:29:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.381 19:29:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:11:08.381 19:29:26 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:11:08.381 19:29:26 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:11:08.381 19:29:26 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:11:08.381 19:29:26 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:11:08.381 19:29:26 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:11:08.381 19:29:26 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:11:08.381 19:29:26 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:11:08.381 19:29:26 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:11:08.381 19:29:26 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:08.381 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:08.381 Waiting for block devices as requested 00:11:08.638 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:08.638 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:11:08.638 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:08.895 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:14.186 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:14.186 19:29:32 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:11:14.186 19:29:32 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:11:14.186 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:11:14.186 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:11:14.186 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:11:14.186 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:11:14.186 19:29:33 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:11:14.186 00:11:14.186 real 0m57.622s 00:11:14.186 user 1m13.506s 00:11:14.186 sys 0m7.805s 00:11:14.186 19:29:33 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.186 ************************************ 00:11:14.186 END TEST blockdev_nvme_gpt 00:11:14.186 ************************************ 00:11:14.186 19:29:33 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:11:14.186 19:29:33 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:14.186 19:29:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:14.186 19:29:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.186 19:29:33 -- common/autotest_common.sh@10 -- # set +x 00:11:14.186 ************************************ 00:11:14.186 START TEST nvme 00:11:14.186 ************************************ 00:11:14.186 19:29:33 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:11:14.445 * Looking for test storage... 00:11:14.445 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:14.445 19:29:33 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:14.445 19:29:33 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:14.445 19:29:33 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:11:14.445 19:29:33 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:14.445 19:29:33 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.445 19:29:33 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.445 19:29:33 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.445 19:29:33 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.445 19:29:33 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.445 19:29:33 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.445 19:29:33 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.445 19:29:33 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.445 19:29:33 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.445 19:29:33 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.445 19:29:33 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.445 19:29:33 nvme -- scripts/common.sh@344 -- # case "$op" in 00:11:14.445 19:29:33 nvme -- scripts/common.sh@345 -- # : 1 00:11:14.445 19:29:33 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.445 19:29:33 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:14.445 19:29:33 nvme -- scripts/common.sh@365 -- # decimal 1 00:11:14.445 19:29:33 nvme -- scripts/common.sh@353 -- # local d=1 00:11:14.445 19:29:33 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.445 19:29:33 nvme -- scripts/common.sh@355 -- # echo 1 00:11:14.445 19:29:33 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.445 19:29:33 nvme -- scripts/common.sh@366 -- # decimal 2 00:11:14.445 19:29:33 nvme -- scripts/common.sh@353 -- # local d=2 00:11:14.445 19:29:33 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.445 19:29:33 nvme -- scripts/common.sh@355 -- # echo 2 00:11:14.445 19:29:33 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.445 19:29:33 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.445 19:29:33 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.445 19:29:33 nvme -- scripts/common.sh@368 -- # return 0 00:11:14.445 19:29:33 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.445 19:29:33 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:14.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.445 --rc genhtml_branch_coverage=1 00:11:14.445 --rc genhtml_function_coverage=1 00:11:14.445 --rc genhtml_legend=1 00:11:14.445 --rc geninfo_all_blocks=1 00:11:14.445 --rc geninfo_unexecuted_blocks=1 00:11:14.445 00:11:14.445 ' 00:11:14.445 19:29:33 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:14.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.445 --rc genhtml_branch_coverage=1 00:11:14.445 --rc genhtml_function_coverage=1 00:11:14.445 --rc genhtml_legend=1 00:11:14.445 --rc geninfo_all_blocks=1 00:11:14.445 --rc geninfo_unexecuted_blocks=1 00:11:14.445 00:11:14.445 ' 00:11:14.445 19:29:33 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:14.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.445 --rc genhtml_branch_coverage=1 00:11:14.445 --rc genhtml_function_coverage=1 00:11:14.445 --rc genhtml_legend=1 00:11:14.445 --rc geninfo_all_blocks=1 00:11:14.445 --rc geninfo_unexecuted_blocks=1 00:11:14.445 00:11:14.445 ' 00:11:14.445 19:29:33 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:14.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.445 --rc genhtml_branch_coverage=1 00:11:14.445 --rc genhtml_function_coverage=1 00:11:14.445 --rc genhtml_legend=1 00:11:14.445 --rc geninfo_all_blocks=1 00:11:14.445 --rc geninfo_unexecuted_blocks=1 00:11:14.445 00:11:14.445 ' 00:11:14.445 19:29:33 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:14.704 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:15.270 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:15.270 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:15.270 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:15.270 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:15.270 19:29:34 nvme -- nvme/nvme.sh@79 -- # uname 00:11:15.270 19:29:34 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:11:15.270 19:29:34 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:11:15.270 19:29:34 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:11:15.270 19:29:34 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:11:15.270 19:29:34 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:11:15.270 19:29:34 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:11:15.270 Waiting for stub to ready for secondary processes... 00:11:15.270 19:29:34 nvme -- common/autotest_common.sh@1075 -- # stubpid=63057 00:11:15.270 19:29:34 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:11:15.270 19:29:34 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:11:15.270 19:29:34 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:15.270 19:29:34 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/63057 ]] 00:11:15.270 19:29:34 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:11:15.529 [2024-12-05 19:29:34.304701] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:11:15.529 [2024-12-05 19:29:34.304816] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:11:16.096 [2024-12-05 19:29:35.046345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:16.354 [2024-12-05 19:29:35.140512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:16.354 [2024-12-05 19:29:35.140777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:16.354 [2024-12-05 19:29:35.140802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.354 [2024-12-05 19:29:35.153866] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:11:16.354 [2024-12-05 19:29:35.154004] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:16.354 [2024-12-05 19:29:35.165710] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:11:16.354 [2024-12-05 19:29:35.165899] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:11:16.354 [2024-12-05 19:29:35.167740] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:16.354 [2024-12-05 19:29:35.167933] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:11:16.354 [2024-12-05 19:29:35.168002] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:11:16.354 [2024-12-05 19:29:35.170640] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:16.354 [2024-12-05 19:29:35.170887] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:11:16.354 [2024-12-05 19:29:35.171092] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:11:16.354 [2024-12-05 19:29:35.173929] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:11:16.354 [2024-12-05 19:29:35.174190] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:11:16.354 [2024-12-05 19:29:35.174309] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:11:16.354 [2024-12-05 19:29:35.174393] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:11:16.355 [2024-12-05 19:29:35.174478] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:11:16.355 done. 00:11:16.355 19:29:35 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:11:16.355 19:29:35 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:11:16.355 19:29:35 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:16.355 19:29:35 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:11:16.355 19:29:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.355 19:29:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:16.355 ************************************ 00:11:16.355 START TEST nvme_reset 00:11:16.355 ************************************ 00:11:16.355 19:29:35 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:11:16.612 Initializing NVMe Controllers 00:11:16.612 Skipping QEMU NVMe SSD at 0000:00:10.0 00:11:16.612 Skipping QEMU NVMe SSD at 0000:00:11.0 00:11:16.612 Skipping QEMU NVMe SSD at 0000:00:13.0 00:11:16.612 Skipping QEMU NVMe SSD at 0000:00:12.0 00:11:16.612 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:11:16.612 ************************************ 00:11:16.612 END TEST nvme_reset 00:11:16.612 ************************************ 00:11:16.612 00:11:16.612 real 0m0.209s 00:11:16.612 user 0m0.075s 00:11:16.612 sys 0m0.093s 00:11:16.612 19:29:35 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:16.612 19:29:35 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:11:16.612 19:29:35 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:11:16.612 19:29:35 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:16.612 19:29:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:16.612 19:29:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:16.612 ************************************ 00:11:16.612 START TEST nvme_identify 00:11:16.612 ************************************ 00:11:16.612 19:29:35 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:11:16.612 19:29:35 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:11:16.612 19:29:35 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:11:16.612 19:29:35 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:11:16.612 19:29:35 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:11:16.612 19:29:35 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:16.612 19:29:35 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:11:16.612 19:29:35 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:16.613 19:29:35 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:16.613 19:29:35 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:16.613 19:29:35 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:16.613 19:29:35 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:16.613 19:29:35 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:11:16.873 
===================================================== 00:11:16.873 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:16.873 ===================================================== 00:11:16.873 Controller Capabilities/Features 00:11:16.873 ================================ 00:11:16.873 Vendor ID: 1b36 00:11:16.873 Subsystem Vendor ID: 1af4 00:11:16.873 Serial Number: 12340 00:11:16.873 Model Number: QEMU NVMe Ctrl 00:11:16.873 Firmware Version: 8.0.0 00:11:16.873 Recommended Arb Burst: 6 00:11:16.873 IEEE OUI Identifier: 00 54 52 00:11:16.873 Multi-path I/O 00:11:16.873 May have multiple subsystem ports: No 00:11:16.873 May have multiple controllers: No 00:11:16.873 Associated with SR-IOV VF: No 00:11:16.873 Max Data Transfer Size: 524288 00:11:16.873 Max Number of Namespaces: 256 00:11:16.873 Max Number of I/O Queues: 64 00:11:16.873 NVMe Specification Version (VS): 1.4 00:11:16.873 NVMe Specification Version (Identify): 1.4 00:11:16.873 Maximum Queue Entries: 2048 00:11:16.873 Contiguous Queues Required: Yes 00:11:16.873 Arbitration Mechanisms Supported 00:11:16.873 Weighted Round Robin: Not Supported 00:11:16.873 Vendor Specific: Not Supported 00:11:16.873 Reset Timeout: 7500 ms 00:11:16.873 Doorbell Stride: 4 bytes 00:11:16.873 NVM Subsystem Reset: Not Supported 00:11:16.873 Command Sets Supported 00:11:16.873 NVM Command Set: Supported 00:11:16.873 Boot Partition: Not Supported 00:11:16.873 Memory Page Size Minimum: 4096 bytes 00:11:16.873 Memory Page Size Maximum: 65536 bytes 00:11:16.873 Persistent Memory Region: Not Supported 00:11:16.873 Optional Asynchronous Events Supported 00:11:16.873 Namespace Attribute Notices: Supported 00:11:16.873 Firmware Activation Notices: Not Supported 00:11:16.873 ANA Change Notices: Not Supported 00:11:16.873 PLE Aggregate Log Change Notices: Not Supported 00:11:16.873 LBA Status Info Alert Notices: Not Supported 00:11:16.873 EGE Aggregate Log Change Notices: Not Supported 00:11:16.873 Normal NVM Subsystem Shutdown event: Not Supported 00:11:16.873 Zone Descriptor Change Notices: Not Supported 00:11:16.873 Discovery Log Change Notices: Not Supported 00:11:16.873 Controller Attributes 00:11:16.873 128-bit Host Identifier: Not Supported 00:11:16.873 Non-Operational Permissive Mode: Not Supported 00:11:16.873 NVM Sets: Not Supported 00:11:16.873 Read Recovery Levels: Not Supported 00:11:16.873 Endurance Groups: Not Supported 00:11:16.873 Predictable Latency Mode: Not Supported 00:11:16.873 Traffic Based Keep ALive: Not Supported 00:11:16.873 Namespace Granularity: Not Supported 00:11:16.873 SQ Associations: Not Supported 00:11:16.873 UUID List: Not Supported 00:11:16.873 Multi-Domain Subsystem: Not Supported 00:11:16.873 Fixed Capacity Management: Not Supported 00:11:16.873 Variable Capacity Management: Not Supported 00:11:16.873 Delete Endurance Group: Not Supported 00:11:16.873 Delete NVM Set: Not Supported 00:11:16.873 Extended LBA Formats Supported: Supported 00:11:16.873 Flexible Data Placement Supported: Not Supported 00:11:16.873 00:11:16.873 Controller Memory Buffer Support 00:11:16.873 ================================ 00:11:16.873 Supported: No 00:11:16.873 00:11:16.873 Persistent Memory Region Support 00:11:16.873 ================================ 00:11:16.873 Supported: No 00:11:16.873 00:11:16.873 Admin Command Set Attributes 00:11:16.873 ============================ 00:11:16.873 Security Send/Receive: Not Supported 00:11:16.873 Format NVM: Supported 00:11:16.873 Firmware Activate/Download: Not Supported 00:11:16.873 Namespace Management: 
Supported 00:11:16.873 Device Self-Test: Not Supported 00:11:16.873 Directives: Supported 00:11:16.873 NVMe-MI: Not Supported 00:11:16.873 Virtualization Management: Not Supported 00:11:16.873 Doorbell Buffer Config: Supported 00:11:16.873 Get LBA Status Capability: Not Supported 00:11:16.873 Command & Feature Lockdown Capability: Not Supported 00:11:16.873 Abort Command Limit: 4 00:11:16.873 Async Event Request Limit: 4 00:11:16.873 Number of Firmware Slots: N/A 00:11:16.873 Firmware Slot 1 Read-Only: N/A 00:11:16.873 Firmware Activation Without Reset: N/A 00:11:16.873 Multiple Update Detection Support: N/A 00:11:16.873 Firmware Update Granularity: No Information Provided 00:11:16.873 Per-Namespace SMART Log: Yes 00:11:16.873 Asymmetric Namespace Access Log Page: Not Supported 00:11:16.873 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:16.873 Command Effects Log Page: Supported 00:11:16.873 Get Log Page Extended Data: Supported 00:11:16.874 Telemetry Log Pages: Not Supported 00:11:16.874 Persistent Event Log Pages: Not Supported 00:11:16.874 Supported Log Pages Log Page: May Support 00:11:16.874 Commands Supported & Effects Log Page: Not Supported 00:11:16.874 Feature Identifiers & Effects Log Page:May Support 00:11:16.874 NVMe-MI Commands & Effects Log Page: May Support 00:11:16.874 Data Area 4 for Telemetry Log: Not Supported 00:11:16.874 Error Log Page Entries Supported: 1 00:11:16.874 Keep Alive: Not Supported 00:11:16.874 00:11:16.874 NVM Command Set Attributes 00:11:16.874 ========================== 00:11:16.874 Submission Queue Entry Size 00:11:16.874 Max: 64 00:11:16.874 Min: 64 00:11:16.874 Completion Queue Entry Size 00:11:16.874 Max: 16 00:11:16.874 Min: 16 00:11:16.874 Number of Namespaces: 256 00:11:16.874 Compare Command: Supported 00:11:16.874 Write Uncorrectable Command: Not Supported 00:11:16.874 Dataset Management Command: Supported 00:11:16.874 Write Zeroes Command: Supported 00:11:16.874 Set Features Save Field: Supported 00:11:16.874 Reservations: Not Supported 00:11:16.874 Timestamp: Supported 00:11:16.874 Copy: Supported 00:11:16.874 Volatile Write Cache: Present 00:11:16.874 Atomic Write Unit (Normal): 1 00:11:16.874 Atomic Write Unit (PFail): 1 00:11:16.874 Atomic Compare & Write Unit: 1 00:11:16.874 Fused Compare & Write: Not Supported 00:11:16.874 Scatter-Gather List 00:11:16.874 SGL Command Set: Supported 00:11:16.874 SGL Keyed: Not Supported 00:11:16.874 SGL Bit Bucket Descriptor: Not Supported 00:11:16.874 SGL Metadata Pointer: Not Supported 00:11:16.874 Oversized SGL: Not Supported 00:11:16.874 SGL Metadata Address: Not Supported 00:11:16.874 SGL Offset: Not Supported 00:11:16.874 Transport SGL Data Block: Not Supported 00:11:16.874 Replay Protected Memory Block: Not Supported 00:11:16.874 00:11:16.874 Firmware Slot Information 00:11:16.874 ========================= 00:11:16.874 Active slot: 1 00:11:16.874 Slot 1 Firmware Revision: 1.0 00:11:16.874 00:11:16.874 00:11:16.874 Commands Supported and Effects 00:11:16.874 ============================== 00:11:16.874 Admin Commands 00:11:16.874 -------------- 00:11:16.874 Delete I/O Submission Queue (00h): Supported 00:11:16.874 Create I/O Submission Queue (01h): Supported 00:11:16.874 Get Log Page (02h): Supported 00:11:16.874 Delete I/O Completion Queue (04h): Supported 00:11:16.874 Create I/O Completion Queue (05h): Supported 00:11:16.874 Identify (06h): Supported 00:11:16.874 Abort (08h): Supported 00:11:16.874 Set Features (09h): Supported 00:11:16.874 Get Features (0Ah): Supported 00:11:16.874 Asynchronous 
Event Request (0Ch): Supported 00:11:16.874 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:16.874 Directive Send (19h): Supported 00:11:16.874 Directive Receive (1Ah): Supported 00:11:16.874 Virtualization Management (1Ch): Supported 00:11:16.874 Doorbell Buffer Config (7Ch): Supported 00:11:16.874 Format NVM (80h): Supported LBA-Change 00:11:16.874 I/O Commands 00:11:16.874 ------------ 00:11:16.874 Flush (00h): Supported LBA-Change 00:11:16.874 Write (01h): Supported LBA-Change 00:11:16.874 Read (02h): Supported 00:11:16.874 Compare (05h): Supported 00:11:16.874 Write Zeroes (08h): Supported LBA-Change 00:11:16.874 Dataset Management (09h): Supported LBA-Change 00:11:16.874 Unknown (0Ch): Supported 00:11:16.874 Unknown (12h): Supported 00:11:16.874 Copy (19h): Supported LBA-Change 00:11:16.874 Unknown (1Dh): Supported LBA-Change 00:11:16.874 00:11:16.874 Error Log 00:11:16.874 ========= 00:11:16.874 00:11:16.874 Arbitration 00:11:16.874 =========== 00:11:16.874 Arbitration Burst: no limit 00:11:16.874 00:11:16.874 Power Management 00:11:16.874 ================ 00:11:16.874 Number of Power States: 1 00:11:16.874 Current Power State: Power State #0 00:11:16.874 Power State #0: 00:11:16.874 Max Power: 25.00 W 00:11:16.874 Non-Operational State: Operational 00:11:16.874 Entry Latency: 16 microseconds 00:11:16.874 Exit Latency: 4 microseconds 00:11:16.874 Relative Read Throughput: 0 00:11:16.874 Relative Read Latency: 0 00:11:16.874 Relative Write Throughput: 0 00:11:16.874 Relative Write Latency: 0 00:11:16.874 Idle Power: Not Reported 00:11:16.874 Active Power: Not Reported 00:11:16.874 Non-Operational Permissive Mode: Not Supported 00:11:16.874 00:11:16.874 Health Information 00:11:16.874 ================== 00:11:16.874 Critical Warnings: 00:11:16.874 Available Spare Space: OK 00:11:16.874 Temperature: OK 00:11:16.874 Device Reliability: OK 00:11:16.874 Read Only: No 00:11:16.874 Volatile Memory Backup: OK 00:11:16.874 Current Temperature: 323 Kelvin (50 Celsius) 00:11:16.874 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:16.874 Available Spare: 0% 00:11:16.874 Available Spare Threshold: 0% 00:11:16.874 Life Percentage Used: 0% 00:11:16.874 Data Units Read: 694 00:11:16.874 Data Units Written: 622 00:11:16.874 Host Read Commands: 40658 00:11:16.874 Host Write Commands: 40444 00:11:16.874 Controller Busy Time: 0 minutes 00:11:16.874 Power Cycles: 0 00:11:16.874 Power On Hours: 0 hours 00:11:16.874 Unsafe Shutdowns: 0 00:11:16.874 Unrecoverable Media Errors: 0 00:11:16.874 Lifetime Error Log Entries: 0 00:11:16.874 Warning Temperature Time: 0 minutes 00:11:16.874 Critical Temperature Time: 0 minutes 00:11:16.874 00:11:16.874 Number of Queues 00:11:16.874 ================ 00:11:16.874 Number of I/O Submission Queues: 64 00:11:16.874 Number of I/O Completion Queues: 64 00:11:16.874 00:11:16.874 ZNS Specific Controller Data 00:11:16.874 ============================ 00:11:16.874 Zone Append Size Limit: 0 00:11:16.874 00:11:16.874 00:11:16.874 Active Namespaces 00:11:16.874 ================= 00:11:16.874 Namespace ID:1 00:11:16.874 Error Recovery Timeout: Unlimited 00:11:16.874 Command Set Identifier: NVM (00h) 00:11:16.874 Deallocate: Supported 00:11:16.874 Deallocated/Unwritten Error: Supported 00:11:16.874 Deallocated Read Value: All 0x00 00:11:16.874 Deallocate in Write Zeroes: Not Supported 00:11:16.874 Deallocated Guard Field: 0xFFFF 00:11:16.874 Flush: Supported 00:11:16.874 Reservation: Not Supported 00:11:16.874 Metadata Transferred as: Separate Metadata Buffer 
00:11:16.874 Namespace Sharing Capabilities: Private 00:11:16.874 Size (in LBAs): 1548666 (5GiB) 00:11:16.874 Capacity (in LBAs): 1548666 (5GiB) 00:11:16.874 Utilization (in LBAs): 1548666 (5GiB) 00:11:16.874 Thin Provisioning: Not Supported 00:11:16.874 Per-NS Atomic Units: No 00:11:16.874 Maximum Single Source Range Length: 128 00:11:16.874 Maximum Copy Length: 128 00:11:16.874 Maximum Source Range Count: 128 00:11:16.874 NGUID/EUI64 Never Reused: No 00:11:16.874 Namespace Write Protected: No 00:11:16.874 Number of LBA Formats: 8 00:11:16.874 Current LBA Format: LBA Format #07 00:11:16.874 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:16.874 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:16.874 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:16.874 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:16.874 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:16.874 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:16.874 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:16.874 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:16.874 00:11:16.874 NVM Specific Namespace Data 00:11:16.874 =========================== 00:11:16.874 Logical Block Storage Tag Mask: 0 00:11:16.874 Protection Information Capabilities: 00:11:16.874 16b Guard Protection Information Storage Tag Support: No 00:11:16.874 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:16.874 Storage Tag Check Read Support: No 00:11:16.874 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.874 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.874 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.874 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.874 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.874 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.874 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.874 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.874 ===================================================== 00:11:16.874 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:16.874 ===================================================== 00:11:16.875 Controller Capabilities/Features 00:11:16.875 ================================ 00:11:16.875 Vendor ID: 1b36 00:11:16.875 Subsystem Vendor ID: 1af4 00:11:16.875 Serial Number: 12341 00:11:16.875 Model Number: QEMU NVMe Ctrl 00:11:16.875 Firmware Version: 8.0.0 00:11:16.875 Recommended Arb Burst: 6 00:11:16.875 IEEE OUI Identifier: 00 54 52 00:11:16.875 Multi-path I/O 00:11:16.875 May have multiple subsystem ports: No 00:11:16.875 May have multiple controllers: No 00:11:16.875 Associated with SR-IOV VF: No 00:11:16.875 Max Data Transfer Size: 524288 00:11:16.875 Max Number of Namespaces: 256 00:11:16.875 Max Number of I/O Queues: 64 00:11:16.875 NVMe Specification Version (VS): 1.4 00:11:16.875 NVMe Specification Version (Identify): 1.4 00:11:16.875 Maximum Queue Entries: 2048 00:11:16.875 Contiguous Queues Required: Yes 00:11:16.875 Arbitration Mechanisms Supported 00:11:16.875 Weighted Round Robin: Not Supported 00:11:16.875 Vendor Specific: Not Supported 00:11:16.875 Reset Timeout: 7500 ms 00:11:16.875 Doorbell Stride: 
4 bytes 00:11:16.875 NVM Subsystem Reset: Not Supported 00:11:16.875 Command Sets Supported 00:11:16.875 NVM Command Set: Supported 00:11:16.875 Boot Partition: Not Supported 00:11:16.875 Memory Page Size Minimum: 4096 bytes 00:11:16.875 Memory Page Size Maximum: 65536 bytes 00:11:16.875 Persistent Memory Region: Not Supported 00:11:16.875 Optional Asynchronous Events Supported 00:11:16.875 Namespace Attribute Notices: Supported 00:11:16.875 Firmware Activation Notices: Not Supported 00:11:16.875 ANA Change Notices: Not Supported 00:11:16.875 PLE Aggregate Log Change Notices: Not Supported 00:11:16.875 LBA Status Info Alert Notices: Not Supported 00:11:16.875 EGE Aggregate Log Change Notices: Not Supported 00:11:16.875 Normal NVM Subsystem Shutdown event: Not Supported 00:11:16.875 Zone Descriptor Change Notices: Not Supported 00:11:16.875 Discovery Log Change Notices: Not Supported 00:11:16.875 Controller Attributes 00:11:16.875 128-bit Host Identifier: Not Supported 00:11:16.875 Non-Operational Permissive Mode: Not Supported 00:11:16.875 NVM Sets: Not Supported 00:11:16.875 Read Recovery Levels: Not Supported 00:11:16.875 Endurance Groups: Not Supported 00:11:16.875 Predictable Latency Mode: Not Supported 00:11:16.875 Traffic Based Keep ALive: Not Supported 00:11:16.875 Namespace Granularity: Not Supported 00:11:16.875 SQ Associations: Not Supported 00:11:16.875 UUID List: Not Supported 00:11:16.875 Multi-Domain Subsystem: Not Supported 00:11:16.875 Fixed Capacity Management: Not Supported 00:11:16.875 Variable Capacity Management: Not Supported 00:11:16.875 Delete Endurance Group: Not Supported 00:11:16.875 Delete NVM Set: Not Supported 00:11:16.875 Extended LBA Formats Supported: Supported 00:11:16.875 Flexible Data Placement Supported: Not Supported 00:11:16.875 00:11:16.875 Controller Memory Buffer Support 00:11:16.875 ================================ 00:11:16.875 Supported: No 00:11:16.875 00:11:16.875 Persistent Memory Region Support 00:11:16.875 ================================ 00:11:16.875 Supported: No 00:11:16.875 00:11:16.875 Admin Command Set Attributes 00:11:16.875 ============================ 00:11:16.875 Security Send/Receive: Not Supported 00:11:16.875 Format NVM: Supported 00:11:16.875 Firmware Activate/Download: Not Supported 00:11:16.875 Namespace Management: Supported 00:11:16.875 Device Self-Test: Not Supported 00:11:16.875 Directives: Supported 00:11:16.875 NVMe-MI: Not Supported 00:11:16.875 Virtualization Management: Not Supported 00:11:16.875 Doorbell Buffer Config: Supported 00:11:16.875 Get LBA Status Capability: Not Supported 00:11:16.875 Command & Feature Lockdown Capability: Not Supported 00:11:16.875 Abort Command Limit: 4 00:11:16.875 Async Event Request Limit: 4 00:11:16.875 Number of Firmware Slots: N/A 00:11:16.875 Firmware Slot 1 Read-Only: N/A 00:11:16.875 Firmware Activation Without Reset: N/A 00:11:16.875 Multiple Update Detection Support: N/A 00:11:16.875 Firmware Update Granularity: No Information Provided 00:11:16.875 Per-Namespace SMART Log: Yes 00:11:16.875 Asymmetric Namespace Access Log Page: Not Supported 00:11:16.875 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:11:16.875 Command Effects Log Page: Supported 00:11:16.875 Get Log Page Extended Data: Supported 00:11:16.875 Telemetry Log Pages: Not Supported 00:11:16.875 Persistent Event Log Pages: Not Supported 00:11:16.875 Supported Log Pages Log Page: May Support 00:11:16.875 Commands Supported & Effects Log Page: Not Supported 00:11:16.875 Feature Identifiers & Effects Log Page:May Support 
00:11:16.875 NVMe-MI Commands & Effects Log Page: May Support 00:11:16.875 Data Area 4 for Telemetry Log: Not Supported 00:11:16.875 Error Log Page Entries Supported: 1 00:11:16.875 Keep Alive: Not Supported 00:11:16.875 00:11:16.875 NVM Command Set Attributes 00:11:16.875 ========================== 00:11:16.875 Submission Queue Entry Size 00:11:16.875 Max: 64 00:11:16.875 Min: 64 00:11:16.875 Completion Queue Entry Size 00:11:16.875 Max: 16 00:11:16.875 Min: 16 00:11:16.875 Number of Namespaces: 256 00:11:16.875 Compare Command: Supported 00:11:16.875 Write Uncorrectable Command: Not Supported 00:11:16.875 Dataset Management Command: Supported 00:11:16.875 Write Zeroes Command: Supported 00:11:16.875 Set Features Save Field: Supported 00:11:16.875 Reservations: Not Supported 00:11:16.875 Timestamp: Supported 00:11:16.875 Copy: Supported 00:11:16.875 Volatile Write Cache: Present 00:11:16.875 Atomic Write Unit (Normal): 1 00:11:16.875 Atomic Write Unit (PFail): 1 00:11:16.875 Atomic Compare & Write Unit: 1 00:11:16.875 Fused Compare & Write: Not Supported 00:11:16.875 Scatter-Gather List 00:11:16.875 SGL Command Set: Supported 00:11:16.875 SGL Keyed: Not Supported 00:11:16.875 SGL Bit Bucket Descriptor: Not Supported 00:11:16.875 SGL Metadata Pointer: Not Supported 00:11:16.875 Oversized SGL: Not Supported 00:11:16.875 SGL Metadata Address: Not Supported 00:11:16.875 SGL Offset: Not Supported 00:11:16.875 Transport SGL Data Block: Not Supported 00:11:16.875 Replay Protected Memory Block: Not Supported 00:11:16.875 00:11:16.875 Firmware Slot Information 00:11:16.875 ========================= 00:11:16.875 Active slot: 1 00:11:16.875 Slot 1 Firmware Revision: 1.0 00:11:16.875 00:11:16.875 00:11:16.875 Commands Supported and Effects 00:11:16.875 ============================== 00:11:16.875 Admin Commands 00:11:16.875 -------------- 00:11:16.875 Delete I/O Submission Queue (00h): Supported 00:11:16.875 Create I/O Submission Queue (01h): Supported 00:11:16.875 Get Log Page (02h): Supported 00:11:16.875 Delete I/O Completion Queue (04h): Supported 00:11:16.875 Create I/O Completion Queue (05h): Supported 00:11:16.875 Identify (06h): Supported 00:11:16.875 Abort (08h): Supported 00:11:16.875 Set Features (09h): Supported 00:11:16.875 Get Features (0Ah): Supported 00:11:16.875 Asynchronous Event Request (0Ch): Supported 00:11:16.875 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:16.875 Directive Send (19h): Supported 00:11:16.875 Directive Receive (1Ah): Supported 00:11:16.875 Virtualization Management (1Ch): Supported 00:11:16.875 Doorbell Buffer Config (7Ch): Supported 00:11:16.875 Format NVM (80h): Supported LBA-Change 00:11:16.875 I/O Commands 00:11:16.875 ------------ 00:11:16.875 Flush (00h): Supported LBA-Change 00:11:16.875 Write (01h): Supported LBA-Change 00:11:16.875 Read (02h): Supported 00:11:16.875 Compare (05h): Supported 00:11:16.875 Write Zeroes (08h): Supported LBA-Change 00:11:16.875 Dataset Management (09h): Supported LBA-Change 00:11:16.875 Unknown (0Ch): Supported 00:11:16.875 Unknown (12h): Supported 00:11:16.875 Copy (19h): Supported LBA-Change 00:11:16.875 Unknown (1Dh): Supported LBA-Change 00:11:16.875 00:11:16.875 Error Log 00:11:16.875 ========= 00:11:16.875 00:11:16.875 Arbitration 00:11:16.875 =========== 00:11:16.875 Arbitration Burst: no limit 00:11:16.875 00:11:16.875 Power Management 00:11:16.875 ================ 00:11:16.875 Number of Power States: 1 00:11:16.875 Current Power State: Power State #0 00:11:16.875 Power State #0: 00:11:16.875 Max 
Power: 25.00 W 00:11:16.875 Non-Operational State: Operational 00:11:16.875 Entry Latency: 16 microseconds 00:11:16.875 Exit Latency: 4 microseconds 00:11:16.875 Relative Read Throughput: 0 00:11:16.875 Relative Read Latency: 0 00:11:16.875 Relative Write Throughput: 0 00:11:16.875 Relative Write Latency: 0 00:11:16.875 Idle Power: Not Reported 00:11:16.875 Active Power: Not Reported 00:11:16.875 Non-Operational Permissive Mode: Not Supported 00:11:16.875 00:11:16.875 Health Information 00:11:16.875 ================== 00:11:16.875 Critical Warnings: 00:11:16.875 Available Spare Space: OK 00:11:16.875 Temperature: OK 00:11:16.875 Device Reliability: OK 00:11:16.876 Read Only: No 00:11:16.876 Volatile Memory Backup: OK 00:11:16.876 Current Temperature: 323 Kelvin (50 Celsius) 00:11:16.876 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:16.876 Available Spare: 0% 00:11:16.876 Available Spare Threshold: 0% 00:11:16.876 Life Percentage Used: 0% 00:11:16.876 Data Units Read: 1086 00:11:16.876 Data Units Written: 950 00:11:16.876 Host Read Commands: 60171 00:11:16.876 Host Write Commands: 58911 00:11:16.876 Controller Busy Time: 0 minutes 00:11:16.876 Power Cycles: 0 00:11:16.876 Power On Hours: 0 hours 00:11:16.876 Unsafe Shutdowns: 0 00:11:16.876 Unrecoverable Media Errors: 0 00:11:16.876 Lifetime Error Log Entries: 0 00:11:16.876 Warning Temperature Time: 0 minutes 00:11:16.876 Critical Temperature Time: 0 minutes 00:11:16.876 00:11:16.876 Number of Queues 00:11:16.876 ================ 00:11:16.876 Number of I/O Submission Queues: 64 00:11:16.876 Number of I/O Completion Queues: 64 00:11:16.876 00:11:16.876 ZNS Specific Controller Data 00:11:16.876 ============================ 00:11:16.876 Zone Append Size Limit: 0 00:11:16.876 00:11:16.876 00:11:16.876 Active Namespaces 00:11:16.876 ================= 00:11:16.876 Namespace ID:1 00:11:16.876 Error Recovery Timeout: Unlimited 00:11:16.876 Command Set Identifier: NVM (00h) 00:11:16.876 Deallocate: Supported 00:11:16.876 Deallocated/Unwritten Error: Supported 00:11:16.876 Deallocated Read Value: All 0x00 00:11:16.876 Deallocate in Write Zeroes: Not Supported 00:11:16.876 Deallocated Guard Field: 0xFFFF 00:11:16.876 Flush: Supported 00:11:16.876 Reservation: Not Supported 00:11:16.876 Namespace Sharing Capabilities: Private 00:11:16.876 Size (in LBAs): 1310720 (5GiB) 00:11:16.876 Capacity (in LBAs): 1310720 (5GiB) 00:11:16.876 Utilization (in LBAs): 1310720 (5GiB) 00:11:16.876 Thin Provisioning: Not Supported 00:11:16.876 Per-NS Atomic Units: No 00:11:16.876 Maximum Single Source Range Length: 128 00:11:16.876 Maximum Copy Length: 128 00:11:16.876 Maximum Source Range Count: 128 00:11:16.876 NGUID/EUI64 Never Reused: No 00:11:16.876 Namespace Write Protected: No 00:11:16.876 Number of LBA Formats: 8 00:11:16.876 Current LBA Format: LBA Format #04 00:11:16.876 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:16.876 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:16.876 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:16.876 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:16.876 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:16.876 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:16.876 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:16.876 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:16.876 00:11:16.876 NVM Specific Namespace Data 00:11:16.876 =========================== 00:11:16.876 Logical Block Storage Tag Mask: 0 00:11:16.876 Protection Information Capabilities: 00:11:16.876 16b 
Guard Protection Information Storage Tag Support: No 00:11:16.876 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:16.876 Storage Tag Check Read Support: No 00:11:16.876 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.876 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.876 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.876 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.876 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.876 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.876 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.876 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.876 ===================================================== 00:11:16.876 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:16.876 ===================================================== 00:11:16.876 Controller Capabilities/Features 00:11:16.876 ================================ 00:11:16.876 Vendor ID: 1b36 00:11:16.876 Subsystem Vendor ID: 1af4 00:11:16.876 Serial Number: 12343 00:11:16.876 Model Number: QEMU NVMe Ctrl 00:11:16.876 Firmware Version: 8.0.0 00:11:16.876 Recommended Arb Burst: 6 00:11:16.876 IEEE OUI Identifier: 00 54 52 00:11:16.876 Multi-path I/O 00:11:16.876 May have multiple subsystem ports: No 00:11:16.876 May have multiple controllers: Yes 00:11:16.876 Associated with SR-IOV VF: No 00:11:16.876 Max Data Transfer Size: 524288 00:11:16.876 Max Number of Namespaces: 256 00:11:16.876 Max Number of I/O Queues: 64 00:11:16.876 NVMe Specification Version (VS): 1.4 00:11:16.876 NVMe Specification Version (Identify): 1.4 00:11:16.876 Maximum Queue Entries: 2048 00:11:16.876 Contiguous Queues Required: Yes 00:11:16.876 Arbitration Mechanisms Supported 00:11:16.876 Weighted Round Robin: Not Supported 00:11:16.876 Vendor Specific: Not Supported 00:11:16.876 Reset Timeout: 7500 ms 00:11:16.876 Doorbell Stride: 4 bytes 00:11:16.876 NVM Subsystem Reset: Not Supported 00:11:16.876 Command Sets Supported 00:11:16.876 NVM Command Set: Supported 00:11:16.876 Boot Partition: Not Supported 00:11:16.876 Memory Page Size Minimum: 4096 bytes 00:11:16.876 Memory Page Size Maximum: 65536 bytes 00:11:16.876 Persistent Memory Region: Not Supported 00:11:16.876 Optional Asynchronous Events Supported 00:11:16.876 Namespace Attribute Notices: Supported 00:11:16.876 Firmware Activation Notices: Not Supported 00:11:16.876 ANA Change Notices: Not Supported 00:11:16.876 PLE Aggregate Log Change Notices: Not Supported 00:11:16.876 LBA Status Info Alert Notices: Not Supported 00:11:16.876 EGE Aggregate Log Change Notices: Not Supported 00:11:16.876 Normal NVM Subsystem Shutdown event: Not Supported 00:11:16.876 Zone Descriptor Change Notices: Not Supported 00:11:16.876 Discovery Log Change Notices: Not Supported 00:11:16.876 Controller Attributes 00:11:16.876 128-bit Host Identifier: Not Supported 00:11:16.876 Non-Operational Permissive Mode: Not Supported 00:11:16.876 NVM Sets: Not Supported 00:11:16.876 Read Recovery Levels: Not Supported 00:11:16.876 Endurance Groups: Supported 00:11:16.876 Predictable Latency Mode: Not Supported 00:11:16.876 Traffic Based Keep ALive: Not Supported 00:11:16.876 
Namespace Granularity: Not Supported 00:11:16.876 SQ Associations: Not Supported 00:11:16.876 UUID List: Not Supported 00:11:16.876 Multi-Domain Subsystem: Not Supported 00:11:16.876 Fixed Capacity Management: Not Supported 00:11:16.876 Variable Capacity Management: Not Supported 00:11:16.876 Delete Endurance Group: Not Supported 00:11:16.876 Delete NVM Set: Not Supported 00:11:16.876 Extended LBA Formats Supported: Supported 00:11:16.876 Flexible Data Placement Supported: Supported 00:11:16.876 00:11:16.876 Controller Memory Buffer Support 00:11:16.876 ================================ 00:11:16.876 Supported: No 00:11:16.876 00:11:16.876 Persistent Memory Region Support 00:11:16.876 ================================ 00:11:16.876 Supported: No 00:11:16.876 00:11:16.876 Admin Command Set Attributes 00:11:16.876 ============================ 00:11:16.876 Security Send/Receive: Not Supported 00:11:16.876 Format NVM: Supported 00:11:16.876 Firmware Activate/Download: Not Supported 00:11:16.876 Namespace Management: Supported 00:11:16.876 Device Self-Test: Not Supported 00:11:16.876 Directives: Supported 00:11:16.876 NVMe-MI: Not Supported 00:11:16.876 Virtualization Management: Not Supported 00:11:16.876 Doorbell Buffer Config: Supported 00:11:16.876 Get LBA Status Capability: Not Supported 00:11:16.876 Command & Feature Lockdown Capability: Not Supported 00:11:16.876 Abort Command Limit: 4 00:11:16.876 Async Event Request Limit: 4 00:11:16.876 Number of Firmware Slots: N/A 00:11:16.876 Firmware Slot 1 Read-Only: N/A 00:11:16.876 Firmware Activation Without Reset: N/A 00:11:16.876 Multiple Update Detection Support: N/A 00:11:16.876 Firmware Update Granularity: No Information Provided 00:11:16.876 Per-Namespace SMART Log: Yes 00:11:16.876 Asymmetric Namespace Access Log Page: Not Supported 00:11:16.876 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:16.876 Command Effects Log Page: Supported 00:11:16.876 Get Log Page Extended Data: Supported 00:11:16.876 Telemetry Log Pages: Not Supported 00:11:16.876 Persistent Event Log Pages: Not Supported 00:11:16.876 Supported Log Pages Log Page: May Support 00:11:16.876 Commands Supported & Effects Log Page: Not Supported 00:11:16.876 Feature Identifiers & Effects Log Page:May Support 00:11:16.876 NVMe-MI Commands & Effects Log Page: May Support 00:11:16.876 Data Area 4 for Telemetry Log: Not Supported 00:11:16.876 Error Log Page Entries Supported: 1 00:11:16.876 Keep Alive: Not Supported 00:11:16.876 00:11:16.876 NVM Command Set Attributes 00:11:16.877 ========================== 00:11:16.877 Submission Queue Entry Size 00:11:16.877 Max: 64 00:11:16.877 Min: 64 00:11:16.877 Completion Queue Entry Size 00:11:16.877 Max: 16 00:11:16.877 Min: 16 00:11:16.877 Number of Namespaces: 256 00:11:16.877 Compare Command: Supported 00:11:16.877 Write Uncorrectable Command: Not Supported 00:11:16.877 Dataset Management Command: Supported 00:11:16.877 Write Zeroes Command: Supported 00:11:16.877 Set Features Save Field: Supported 00:11:16.877 Reservations: Not Supported 00:11:16.877 Timestamp: Supported 00:11:16.877 Copy: Supported 00:11:16.877 Volatile Write Cache: Present 00:11:16.877 Atomic Write Unit (Normal): 1 00:11:16.877 Atomic Write Unit (PFail): 1 00:11:16.877 Atomic Compare & Write Unit: 1 00:11:16.877 Fused Compare & Write: Not Supported 00:11:16.877 Scatter-Gather List 00:11:16.877 SGL Command Set: Supported 00:11:16.877 SGL Keyed: Not Supported 00:11:16.877 SGL Bit Bucket Descriptor: Not Supported 00:11:16.877 SGL Metadata Pointer: Not Supported 
00:11:16.877 Oversized SGL: Not Supported 00:11:16.877 SGL Metadata Address: Not Supported 00:11:16.877 SGL Offset: Not Supported 00:11:16.877 Transport SGL Data Block: Not Supported 00:11:16.877 Replay Protected Memory Block: Not Supported 00:11:16.877 00:11:16.877 Firmware Slot Information 00:11:16.877 ========================= 00:11:16.877 Active slot: 1 00:11:16.877 Slot 1 Firmware Revision: 1.0 00:11:16.877 00:11:16.877 00:11:16.877 Commands Supported and Effects 00:11:16.877 ============================== 00:11:16.877 Admin Commands 00:11:16.877 -------------- 00:11:16.877 Delete I/O Submission Queue (00h): Supported 00:11:16.877 Create I/O Submission Queue (01h): Supported 00:11:16.877 Get Log Page (02h): Supported 00:11:16.877 Delete I/O Completion Queue (04h): Supported 00:11:16.877 Create I/O Completion Queue (05h): Supported 00:11:16.877 Identify (06h): Supported 00:11:16.877 Abort (08h): Supported 00:11:16.877 Set Features (09h): Supported 00:11:16.877 Get Features (0Ah): Supported 00:11:16.877 Asynchronous Event Request (0Ch): Supported 00:11:16.877 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:16.877 Directive Send (19h): Supported 00:11:16.877 Directive Receive (1Ah): Supported 00:11:16.877 Virtualization Management (1Ch): Supported 00:11:16.877 Doorbell Buffer Config (7Ch): Supported 00:11:16.877 Format NVM (80h): Supported LBA-Change 00:11:16.877 I/O Commands 00:11:16.877 ------------ 00:11:16.877 Flush (00h): Supported LBA-Change 00:11:16.877 Write (01h): Supported LBA-Change 00:11:16.877 Read (02h): Supported 00:11:16.877 Compare (05h): Supported 00:11:16.877 Write Zeroes (08h): Supported LBA-Change 00:11:16.877 Dataset Management (09h): Supported LBA-Change 00:11:16.877 Unknown (0Ch): Supported 00:11:16.877 Unknown (12h): Supported 00:11:16.877 Copy (19h): Supported LBA-Change 00:11:16.877 Unknown (1Dh): Supported LBA-Change 00:11:16.877 00:11:16.877 Error Log 00:11:16.877 ========= 00:11:16.877 00:11:16.877 Arbitration 00:11:16.877 =========== 00:11:16.877 Arbitration Burst: no limit 00:11:16.877 00:11:16.877 Power Management 00:11:16.877 ================ 00:11:16.877 Number of Power States: 1 00:11:16.877 Current Power State: Power State #0 00:11:16.877 Power State #0: 00:11:16.877 Max Power: 25.00 W 00:11:16.877 Non-Operational State: Operational 00:11:16.877 Entry Latency: 16 microseconds 00:11:16.877 Exit Latency: 4 microseconds 00:11:16.877 Relative Read Throughput: 0 00:11:16.877 Relative Read Latency: 0 00:11:16.877 Relative Write Throughput: 0 00:11:16.877 Relative Write Latency: 0 00:11:16.877 Idle Power: Not Reported 00:11:16.877 Active Power: Not Reported 00:11:16.877 Non-Operational Permissive Mode: Not Supported 00:11:16.877 00:11:16.877 Health Information 00:11:16.877 ================== 00:11:16.877 Critical Warnings: 00:11:16.877 Available Spare Space: OK 00:11:16.877 Temperature: OK 00:11:16.877 Device Reliability: OK 00:11:16.877 Read Only: No 00:11:16.877 Volatile Memory Backup: OK 00:11:16.877 Current Temperature: 323 Kelvin (50 Celsius) 00:11:16.877 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:16.877 Available Spare: 0% 00:11:16.877 Available Spare Threshold: 0% 00:11:16.877 Life Percentage Used: [2024-12-05 19:29:35.764266] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 63078 terminated unexpected 00:11:16.877 [2024-12-05 19:29:35.766232] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 63078 terminated unexpected 00:11:16.877 [2024-12-05 
19:29:35.767406] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 63078 terminated unexpected 00:11:16.877 0% 00:11:16.877 Data Units Read: 888 00:11:16.877 Data Units Written: 817 00:11:16.877 Host Read Commands: 42712 00:11:16.877 Host Write Commands: 42137 00:11:16.877 Controller Busy Time: 0 minutes 00:11:16.877 Power Cycles: 0 00:11:16.877 Power On Hours: 0 hours 00:11:16.877 Unsafe Shutdowns: 0 00:11:16.877 Unrecoverable Media Errors: 0 00:11:16.877 Lifetime Error Log Entries: 0 00:11:16.877 Warning Temperature Time: 0 minutes 00:11:16.877 Critical Temperature Time: 0 minutes 00:11:16.877 00:11:16.877 Number of Queues 00:11:16.877 ================ 00:11:16.877 Number of I/O Submission Queues: 64 00:11:16.877 Number of I/O Completion Queues: 64 00:11:16.877 00:11:16.877 ZNS Specific Controller Data 00:11:16.877 ============================ 00:11:16.877 Zone Append Size Limit: 0 00:11:16.877 00:11:16.877 00:11:16.877 Active Namespaces 00:11:16.877 ================= 00:11:16.877 Namespace ID:1 00:11:16.877 Error Recovery Timeout: Unlimited 00:11:16.877 Command Set Identifier: NVM (00h) 00:11:16.877 Deallocate: Supported 00:11:16.877 Deallocated/Unwritten Error: Supported 00:11:16.877 Deallocated Read Value: All 0x00 00:11:16.877 Deallocate in Write Zeroes: Not Supported 00:11:16.877 Deallocated Guard Field: 0xFFFF 00:11:16.877 Flush: Supported 00:11:16.877 Reservation: Not Supported 00:11:16.877 Namespace Sharing Capabilities: Multiple Controllers 00:11:16.877 Size (in LBAs): 262144 (1GiB) 00:11:16.877 Capacity (in LBAs): 262144 (1GiB) 00:11:16.877 Utilization (in LBAs): 262144 (1GiB) 00:11:16.877 Thin Provisioning: Not Supported 00:11:16.877 Per-NS Atomic Units: No 00:11:16.877 Maximum Single Source Range Length: 128 00:11:16.877 Maximum Copy Length: 128 00:11:16.877 Maximum Source Range Count: 128 00:11:16.877 NGUID/EUI64 Never Reused: No 00:11:16.877 Namespace Write Protected: No 00:11:16.877 Endurance group ID: 1 00:11:16.877 Number of LBA Formats: 8 00:11:16.877 Current LBA Format: LBA Format #04 00:11:16.877 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:16.877 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:16.877 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:16.877 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:16.877 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:16.877 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:16.877 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:16.877 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:16.877 00:11:16.877 Get Feature FDP: 00:11:16.877 ================ 00:11:16.877 Enabled: Yes 00:11:16.877 FDP configuration index: 0 00:11:16.877 00:11:16.877 FDP configurations log page 00:11:16.877 =========================== 00:11:16.877 Number of FDP configurations: 1 00:11:16.877 Version: 0 00:11:16.877 Size: 112 00:11:16.877 FDP Configuration Descriptor: 0 00:11:16.877 Descriptor Size: 96 00:11:16.877 Reclaim Group Identifier format: 2 00:11:16.877 FDP Volatile Write Cache: Not Present 00:11:16.877 FDP Configuration: Valid 00:11:16.877 Vendor Specific Size: 0 00:11:16.877 Number of Reclaim Groups: 2 00:11:16.877 Number of Reclaim Unit Handles: 8 00:11:16.877 Max Placement Identifiers: 128 00:11:16.877 Number of Namespaces Supported: 256 00:11:16.877 Reclaim unit Nominal Size: 6000000 bytes 00:11:16.877 Estimated Reclaim Unit Time Limit: Not Reported 00:11:16.877 RUH Desc #000: RUH Type: Initially Isolated 00:11:16.877 RUH Desc #001: RUH Type: Initially
Isolated 00:11:16.877 RUH Desc #002: RUH Type: Initially Isolated 00:11:16.877 RUH Desc #003: RUH Type: Initially Isolated 00:11:16.877 RUH Desc #004: RUH Type: Initially Isolated 00:11:16.877 RUH Desc #005: RUH Type: Initially Isolated 00:11:16.877 RUH Desc #006: RUH Type: Initially Isolated 00:11:16.877 RUH Desc #007: RUH Type: Initially Isolated 00:11:16.877 00:11:16.877 FDP reclaim unit handle usage log page 00:11:16.878 ====================================== 00:11:16.878 Number of Reclaim Unit Handles: 8 00:11:16.878 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:16.878 RUH Usage Desc #001: RUH Attributes: Unused 00:11:16.878 RUH Usage Desc #002: RUH Attributes: Unused 00:11:16.878 RUH Usage Desc #003: RUH Attributes: Unused 00:11:16.878 RUH Usage Desc #004: RUH Attributes: Unused 00:11:16.878 RUH Usage Desc #005: RUH Attributes: Unused 00:11:16.878 RUH Usage Desc #006: RUH Attributes: Unused 00:11:16.878 RUH Usage Desc #007: RUH Attributes: Unused 00:11:16.878 00:11:16.878 FDP statistics log page 00:11:16.878 ======================= 00:11:16.878 Host bytes with metadata written: 522493952 00:11:16.878 [2024-12-05 19:29:35.770745] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 63078 terminated unexpected 00:11:16.878 Media bytes with metadata written: 522551296 00:11:16.878 Media bytes erased: 0 00:11:16.878 00:11:16.878 FDP events log page 00:11:16.878 =================== 00:11:16.878 Number of FDP events: 0 00:11:16.878 00:11:16.878 NVM Specific Namespace Data 00:11:16.878 =========================== 00:11:16.878 Logical Block Storage Tag Mask: 0 00:11:16.878 Protection Information Capabilities: 00:11:16.878 16b Guard Protection Information Storage Tag Support: No 00:11:16.878 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:16.878 Storage Tag Check Read Support: No 00:11:16.878 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.878 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.878 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.878 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.878 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.878 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.878 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.878 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.878 ===================================================== 00:11:16.878 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:16.878 ===================================================== 00:11:16.878 Controller Capabilities/Features 00:11:16.878 ================================ 00:11:16.878 Vendor ID: 1b36 00:11:16.878 Subsystem Vendor ID: 1af4 00:11:16.878 Serial Number: 12342 00:11:16.878 Model Number: QEMU NVMe Ctrl 00:11:16.878 Firmware Version: 8.0.0 00:11:16.878 Recommended Arb Burst: 6 00:11:16.878 IEEE OUI Identifier: 00 54 52 00:11:16.878 Multi-path I/O 00:11:16.878 May have multiple subsystem ports: No 00:11:16.878 May have multiple controllers: No 00:11:16.878 Associated with SR-IOV VF: No 00:11:16.878 Max Data Transfer Size: 524288 00:11:16.878 Max Number of Namespaces: 256 00:11:16.878 Max Number of
I/O Queues: 64 00:11:16.878 NVMe Specification Version (VS): 1.4 00:11:16.878 NVMe Specification Version (Identify): 1.4 00:11:16.878 Maximum Queue Entries: 2048 00:11:16.878 Contiguous Queues Required: Yes 00:11:16.878 Arbitration Mechanisms Supported 00:11:16.878 Weighted Round Robin: Not Supported 00:11:16.878 Vendor Specific: Not Supported 00:11:16.878 Reset Timeout: 7500 ms 00:11:16.878 Doorbell Stride: 4 bytes 00:11:16.878 NVM Subsystem Reset: Not Supported 00:11:16.878 Command Sets Supported 00:11:16.878 NVM Command Set: Supported 00:11:16.878 Boot Partition: Not Supported 00:11:16.878 Memory Page Size Minimum: 4096 bytes 00:11:16.878 Memory Page Size Maximum: 65536 bytes 00:11:16.878 Persistent Memory Region: Not Supported 00:11:16.878 Optional Asynchronous Events Supported 00:11:16.878 Namespace Attribute Notices: Supported 00:11:16.878 Firmware Activation Notices: Not Supported 00:11:16.878 ANA Change Notices: Not Supported 00:11:16.878 PLE Aggregate Log Change Notices: Not Supported 00:11:16.878 LBA Status Info Alert Notices: Not Supported 00:11:16.878 EGE Aggregate Log Change Notices: Not Supported 00:11:16.878 Normal NVM Subsystem Shutdown event: Not Supported 00:11:16.878 Zone Descriptor Change Notices: Not Supported 00:11:16.878 Discovery Log Change Notices: Not Supported 00:11:16.878 Controller Attributes 00:11:16.878 128-bit Host Identifier: Not Supported 00:11:16.878 Non-Operational Permissive Mode: Not Supported 00:11:16.878 NVM Sets: Not Supported 00:11:16.878 Read Recovery Levels: Not Supported 00:11:16.878 Endurance Groups: Not Supported 00:11:16.878 Predictable Latency Mode: Not Supported 00:11:16.878 Traffic Based Keep ALive: Not Supported 00:11:16.878 Namespace Granularity: Not Supported 00:11:16.878 SQ Associations: Not Supported 00:11:16.878 UUID List: Not Supported 00:11:16.878 Multi-Domain Subsystem: Not Supported 00:11:16.878 Fixed Capacity Management: Not Supported 00:11:16.878 Variable Capacity Management: Not Supported 00:11:16.878 Delete Endurance Group: Not Supported 00:11:16.878 Delete NVM Set: Not Supported 00:11:16.878 Extended LBA Formats Supported: Supported 00:11:16.878 Flexible Data Placement Supported: Not Supported 00:11:16.878 00:11:16.878 Controller Memory Buffer Support 00:11:16.878 ================================ 00:11:16.878 Supported: No 00:11:16.878 00:11:16.878 Persistent Memory Region Support 00:11:16.878 ================================ 00:11:16.878 Supported: No 00:11:16.878 00:11:16.878 Admin Command Set Attributes 00:11:16.878 ============================ 00:11:16.878 Security Send/Receive: Not Supported 00:11:16.878 Format NVM: Supported 00:11:16.878 Firmware Activate/Download: Not Supported 00:11:16.878 Namespace Management: Supported 00:11:16.878 Device Self-Test: Not Supported 00:11:16.878 Directives: Supported 00:11:16.878 NVMe-MI: Not Supported 00:11:16.878 Virtualization Management: Not Supported 00:11:16.878 Doorbell Buffer Config: Supported 00:11:16.878 Get LBA Status Capability: Not Supported 00:11:16.878 Command & Feature Lockdown Capability: Not Supported 00:11:16.878 Abort Command Limit: 4 00:11:16.878 Async Event Request Limit: 4 00:11:16.878 Number of Firmware Slots: N/A 00:11:16.878 Firmware Slot 1 Read-Only: N/A 00:11:16.878 Firmware Activation Without Reset: N/A 00:11:16.878 Multiple Update Detection Support: N/A 00:11:16.878 Firmware Update Granularity: No Information Provided 00:11:16.878 Per-Namespace SMART Log: Yes 00:11:16.878 Asymmetric Namespace Access Log Page: Not Supported 00:11:16.878 Subsystem NQN: 
nqn.2019-08.org.qemu:12342 00:11:16.878 Command Effects Log Page: Supported 00:11:16.878 Get Log Page Extended Data: Supported 00:11:16.878 Telemetry Log Pages: Not Supported 00:11:16.878 Persistent Event Log Pages: Not Supported 00:11:16.878 Supported Log Pages Log Page: May Support 00:11:16.878 Commands Supported & Effects Log Page: Not Supported 00:11:16.878 Feature Identifiers & Effects Log Page:May Support 00:11:16.878 NVMe-MI Commands & Effects Log Page: May Support 00:11:16.878 Data Area 4 for Telemetry Log: Not Supported 00:11:16.878 Error Log Page Entries Supported: 1 00:11:16.878 Keep Alive: Not Supported 00:11:16.878 00:11:16.878 NVM Command Set Attributes 00:11:16.878 ========================== 00:11:16.878 Submission Queue Entry Size 00:11:16.878 Max: 64 00:11:16.878 Min: 64 00:11:16.878 Completion Queue Entry Size 00:11:16.878 Max: 16 00:11:16.878 Min: 16 00:11:16.878 Number of Namespaces: 256 00:11:16.878 Compare Command: Supported 00:11:16.878 Write Uncorrectable Command: Not Supported 00:11:16.878 Dataset Management Command: Supported 00:11:16.878 Write Zeroes Command: Supported 00:11:16.878 Set Features Save Field: Supported 00:11:16.878 Reservations: Not Supported 00:11:16.879 Timestamp: Supported 00:11:16.879 Copy: Supported 00:11:16.879 Volatile Write Cache: Present 00:11:16.879 Atomic Write Unit (Normal): 1 00:11:16.879 Atomic Write Unit (PFail): 1 00:11:16.879 Atomic Compare & Write Unit: 1 00:11:16.879 Fused Compare & Write: Not Supported 00:11:16.879 Scatter-Gather List 00:11:16.879 SGL Command Set: Supported 00:11:16.879 SGL Keyed: Not Supported 00:11:16.879 SGL Bit Bucket Descriptor: Not Supported 00:11:16.879 SGL Metadata Pointer: Not Supported 00:11:16.879 Oversized SGL: Not Supported 00:11:16.879 SGL Metadata Address: Not Supported 00:11:16.879 SGL Offset: Not Supported 00:11:16.879 Transport SGL Data Block: Not Supported 00:11:16.879 Replay Protected Memory Block: Not Supported 00:11:16.879 00:11:16.879 Firmware Slot Information 00:11:16.879 ========================= 00:11:16.879 Active slot: 1 00:11:16.879 Slot 1 Firmware Revision: 1.0 00:11:16.879 00:11:16.879 00:11:16.879 Commands Supported and Effects 00:11:16.879 ============================== 00:11:16.879 Admin Commands 00:11:16.879 -------------- 00:11:16.879 Delete I/O Submission Queue (00h): Supported 00:11:16.879 Create I/O Submission Queue (01h): Supported 00:11:16.879 Get Log Page (02h): Supported 00:11:16.879 Delete I/O Completion Queue (04h): Supported 00:11:16.879 Create I/O Completion Queue (05h): Supported 00:11:16.879 Identify (06h): Supported 00:11:16.879 Abort (08h): Supported 00:11:16.879 Set Features (09h): Supported 00:11:16.879 Get Features (0Ah): Supported 00:11:16.879 Asynchronous Event Request (0Ch): Supported 00:11:16.879 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:16.879 Directive Send (19h): Supported 00:11:16.879 Directive Receive (1Ah): Supported 00:11:16.879 Virtualization Management (1Ch): Supported 00:11:16.879 Doorbell Buffer Config (7Ch): Supported 00:11:16.879 Format NVM (80h): Supported LBA-Change 00:11:16.879 I/O Commands 00:11:16.879 ------------ 00:11:16.879 Flush (00h): Supported LBA-Change 00:11:16.879 Write (01h): Supported LBA-Change 00:11:16.879 Read (02h): Supported 00:11:16.879 Compare (05h): Supported 00:11:16.879 Write Zeroes (08h): Supported LBA-Change 00:11:16.879 Dataset Management (09h): Supported LBA-Change 00:11:16.879 Unknown (0Ch): Supported 00:11:16.879 Unknown (12h): Supported 00:11:16.879 Copy (19h): Supported LBA-Change 
00:11:16.879 Unknown (1Dh): Supported LBA-Change 00:11:16.879 00:11:16.879 Error Log 00:11:16.879 ========= 00:11:16.879 00:11:16.879 Arbitration 00:11:16.879 =========== 00:11:16.879 Arbitration Burst: no limit 00:11:16.879 00:11:16.879 Power Management 00:11:16.879 ================ 00:11:16.879 Number of Power States: 1 00:11:16.879 Current Power State: Power State #0 00:11:16.879 Power State #0: 00:11:16.879 Max Power: 25.00 W 00:11:16.879 Non-Operational State: Operational 00:11:16.879 Entry Latency: 16 microseconds 00:11:16.879 Exit Latency: 4 microseconds 00:11:16.879 Relative Read Throughput: 0 00:11:16.879 Relative Read Latency: 0 00:11:16.879 Relative Write Throughput: 0 00:11:16.879 Relative Write Latency: 0 00:11:16.879 Idle Power: Not Reported 00:11:16.879 Active Power: Not Reported 00:11:16.879 Non-Operational Permissive Mode: Not Supported 00:11:16.879 00:11:16.879 Health Information 00:11:16.879 ================== 00:11:16.879 Critical Warnings: 00:11:16.879 Available Spare Space: OK 00:11:16.879 Temperature: OK 00:11:16.879 Device Reliability: OK 00:11:16.879 Read Only: No 00:11:16.879 Volatile Memory Backup: OK 00:11:16.879 Current Temperature: 323 Kelvin (50 Celsius) 00:11:16.879 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:16.879 Available Spare: 0% 00:11:16.879 Available Spare Threshold: 0% 00:11:16.879 Life Percentage Used: 0% 00:11:16.879 Data Units Read: 2294 00:11:16.879 Data Units Written: 2081 00:11:16.879 Host Read Commands: 124781 00:11:16.879 Host Write Commands: 123050 00:11:16.879 Controller Busy Time: 0 minutes 00:11:16.879 Power Cycles: 0 00:11:16.879 Power On Hours: 0 hours 00:11:16.879 Unsafe Shutdowns: 0 00:11:16.879 Unrecoverable Media Errors: 0 00:11:16.879 Lifetime Error Log Entries: 0 00:11:16.879 Warning Temperature Time: 0 minutes 00:11:16.879 Critical Temperature Time: 0 minutes 00:11:16.879 00:11:16.879 Number of Queues 00:11:16.879 ================ 00:11:16.879 Number of I/O Submission Queues: 64 00:11:16.879 Number of I/O Completion Queues: 64 00:11:16.879 00:11:16.879 ZNS Specific Controller Data 00:11:16.879 ============================ 00:11:16.879 Zone Append Size Limit: 0 00:11:16.879 00:11:16.879 00:11:16.879 Active Namespaces 00:11:16.879 ================= 00:11:16.879 Namespace ID:1 00:11:16.879 Error Recovery Timeout: Unlimited 00:11:16.879 Command Set Identifier: NVM (00h) 00:11:16.879 Deallocate: Supported 00:11:16.879 Deallocated/Unwritten Error: Supported 00:11:16.879 Deallocated Read Value: All 0x00 00:11:16.879 Deallocate in Write Zeroes: Not Supported 00:11:16.879 Deallocated Guard Field: 0xFFFF 00:11:16.879 Flush: Supported 00:11:16.879 Reservation: Not Supported 00:11:16.879 Namespace Sharing Capabilities: Private 00:11:16.879 Size (in LBAs): 1048576 (4GiB) 00:11:16.879 Capacity (in LBAs): 1048576 (4GiB) 00:11:16.879 Utilization (in LBAs): 1048576 (4GiB) 00:11:16.879 Thin Provisioning: Not Supported 00:11:16.879 Per-NS Atomic Units: No 00:11:16.879 Maximum Single Source Range Length: 128 00:11:16.879 Maximum Copy Length: 128 00:11:16.879 Maximum Source Range Count: 128 00:11:16.879 NGUID/EUI64 Never Reused: No 00:11:16.879 Namespace Write Protected: No 00:11:16.879 Number of LBA Formats: 8 00:11:16.879 Current LBA Format: LBA Format #04 00:11:16.879 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:16.879 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:16.879 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:16.879 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:16.879 LBA Format #04: Data Size: 
4096 Metadata Size: 0 00:11:16.879 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:16.879 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:16.879 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:16.879 00:11:16.879 NVM Specific Namespace Data 00:11:16.879 =========================== 00:11:16.879 Logical Block Storage Tag Mask: 0 00:11:16.879 Protection Information Capabilities: 00:11:16.879 16b Guard Protection Information Storage Tag Support: No 00:11:16.879 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:16.879 Storage Tag Check Read Support: No 00:11:16.879 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.879 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.879 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.879 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.879 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.879 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.879 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.879 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.879 Namespace ID:2 00:11:16.879 Error Recovery Timeout: Unlimited 00:11:16.879 Command Set Identifier: NVM (00h) 00:11:16.879 Deallocate: Supported 00:11:16.879 Deallocated/Unwritten Error: Supported 00:11:16.879 Deallocated Read Value: All 0x00 00:11:16.879 Deallocate in Write Zeroes: Not Supported 00:11:16.879 Deallocated Guard Field: 0xFFFF 00:11:16.879 Flush: Supported 00:11:16.879 Reservation: Not Supported 00:11:16.879 Namespace Sharing Capabilities: Private 00:11:16.879 Size (in LBAs): 1048576 (4GiB) 00:11:16.879 Capacity (in LBAs): 1048576 (4GiB) 00:11:16.879 Utilization (in LBAs): 1048576 (4GiB) 00:11:16.879 Thin Provisioning: Not Supported 00:11:16.880 Per-NS Atomic Units: No 00:11:16.880 Maximum Single Source Range Length: 128 00:11:16.880 Maximum Copy Length: 128 00:11:16.880 Maximum Source Range Count: 128 00:11:16.880 NGUID/EUI64 Never Reused: No 00:11:16.880 Namespace Write Protected: No 00:11:16.880 Number of LBA Formats: 8 00:11:16.880 Current LBA Format: LBA Format #04 00:11:16.880 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:16.880 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:16.880 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:16.880 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:16.880 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:16.880 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:16.880 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:16.880 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:16.880 00:11:16.880 NVM Specific Namespace Data 00:11:16.880 =========================== 00:11:16.880 Logical Block Storage Tag Mask: 0 00:11:16.880 Protection Information Capabilities: 00:11:16.880 16b Guard Protection Information Storage Tag Support: No 00:11:16.880 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:16.880 Storage Tag Check Read Support: No 00:11:16.880 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.880 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:11:16.880 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.880 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.880 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.880 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.880 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.880 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.880 Namespace ID:3 00:11:16.880 Error Recovery Timeout: Unlimited 00:11:16.880 Command Set Identifier: NVM (00h) 00:11:16.880 Deallocate: Supported 00:11:16.880 Deallocated/Unwritten Error: Supported 00:11:16.880 Deallocated Read Value: All 0x00 00:11:16.880 Deallocate in Write Zeroes: Not Supported 00:11:16.880 Deallocated Guard Field: 0xFFFF 00:11:16.880 Flush: Supported 00:11:16.880 Reservation: Not Supported 00:11:16.880 Namespace Sharing Capabilities: Private 00:11:16.880 Size (in LBAs): 1048576 (4GiB) 00:11:16.880 Capacity (in LBAs): 1048576 (4GiB) 00:11:16.880 Utilization (in LBAs): 1048576 (4GiB) 00:11:16.880 Thin Provisioning: Not Supported 00:11:16.880 Per-NS Atomic Units: No 00:11:16.880 Maximum Single Source Range Length: 128 00:11:16.880 Maximum Copy Length: 128 00:11:16.880 Maximum Source Range Count: 128 00:11:16.880 NGUID/EUI64 Never Reused: No 00:11:16.880 Namespace Write Protected: No 00:11:16.880 Number of LBA Formats: 8 00:11:16.880 Current LBA Format: LBA Format #04 00:11:16.880 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:16.880 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:16.880 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:16.880 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:16.880 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:16.880 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:16.880 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:16.880 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:16.880 00:11:16.880 NVM Specific Namespace Data 00:11:16.880 =========================== 00:11:16.880 Logical Block Storage Tag Mask: 0 00:11:16.880 Protection Information Capabilities: 00:11:16.880 16b Guard Protection Information Storage Tag Support: No 00:11:16.880 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:16.880 Storage Tag Check Read Support: No 00:11:16.880 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.880 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.880 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.880 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.880 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.880 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.880 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.880 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:16.880 19:29:35 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:16.880 19:29:35 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:11:17.139 ===================================================== 00:11:17.139 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:17.139 ===================================================== 00:11:17.139 Controller Capabilities/Features 00:11:17.139 ================================ 00:11:17.139 Vendor ID: 1b36 00:11:17.139 Subsystem Vendor ID: 1af4 00:11:17.139 Serial Number: 12340 00:11:17.139 Model Number: QEMU NVMe Ctrl 00:11:17.139 Firmware Version: 8.0.0 00:11:17.139 Recommended Arb Burst: 6 00:11:17.139 IEEE OUI Identifier: 00 54 52 00:11:17.139 Multi-path I/O 00:11:17.139 May have multiple subsystem ports: No 00:11:17.139 May have multiple controllers: No 00:11:17.139 Associated with SR-IOV VF: No 00:11:17.139 Max Data Transfer Size: 524288 00:11:17.139 Max Number of Namespaces: 256 00:11:17.139 Max Number of I/O Queues: 64 00:11:17.139 NVMe Specification Version (VS): 1.4 00:11:17.139 NVMe Specification Version (Identify): 1.4 00:11:17.139 Maximum Queue Entries: 2048 00:11:17.139 Contiguous Queues Required: Yes 00:11:17.140 Arbitration Mechanisms Supported 00:11:17.140 Weighted Round Robin: Not Supported 00:11:17.140 Vendor Specific: Not Supported 00:11:17.140 Reset Timeout: 7500 ms 00:11:17.140 Doorbell Stride: 4 bytes 00:11:17.140 NVM Subsystem Reset: Not Supported 00:11:17.140 Command Sets Supported 00:11:17.140 NVM Command Set: Supported 00:11:17.140 Boot Partition: Not Supported 00:11:17.140 Memory Page Size Minimum: 4096 bytes 00:11:17.140 Memory Page Size Maximum: 65536 bytes 00:11:17.140 Persistent Memory Region: Not Supported 00:11:17.140 Optional Asynchronous Events Supported 00:11:17.140 Namespace Attribute Notices: Supported 00:11:17.140 Firmware Activation Notices: Not Supported 00:11:17.140 ANA Change Notices: Not Supported 00:11:17.140 PLE Aggregate Log Change Notices: Not Supported 00:11:17.140 LBA Status Info Alert Notices: Not Supported 00:11:17.140 EGE Aggregate Log Change Notices: Not Supported 00:11:17.140 Normal NVM Subsystem Shutdown event: Not Supported 00:11:17.140 Zone Descriptor Change Notices: Not Supported 00:11:17.140 Discovery Log Change Notices: Not Supported 00:11:17.140 Controller Attributes 00:11:17.140 128-bit Host Identifier: Not Supported 00:11:17.140 Non-Operational Permissive Mode: Not Supported 00:11:17.140 NVM Sets: Not Supported 00:11:17.140 Read Recovery Levels: Not Supported 00:11:17.140 Endurance Groups: Not Supported 00:11:17.140 Predictable Latency Mode: Not Supported 00:11:17.140 Traffic Based Keep ALive: Not Supported 00:11:17.140 Namespace Granularity: Not Supported 00:11:17.140 SQ Associations: Not Supported 00:11:17.140 UUID List: Not Supported 00:11:17.140 Multi-Domain Subsystem: Not Supported 00:11:17.140 Fixed Capacity Management: Not Supported 00:11:17.140 Variable Capacity Management: Not Supported 00:11:17.140 Delete Endurance Group: Not Supported 00:11:17.140 Delete NVM Set: Not Supported 00:11:17.140 Extended LBA Formats Supported: Supported 00:11:17.140 Flexible Data Placement Supported: Not Supported 00:11:17.140 00:11:17.140 Controller Memory Buffer Support 00:11:17.140 ================================ 00:11:17.140 Supported: No 00:11:17.140 00:11:17.140 Persistent Memory Region Support 00:11:17.140 ================================ 00:11:17.140 Supported: No 00:11:17.140 00:11:17.140 Admin Command Set Attributes 00:11:17.140 ============================ 00:11:17.140 Security Send/Receive: Not Supported 00:11:17.140 
Format NVM: Supported 00:11:17.140 Firmware Activate/Download: Not Supported 00:11:17.140 Namespace Management: Supported 00:11:17.140 Device Self-Test: Not Supported 00:11:17.140 Directives: Supported 00:11:17.140 NVMe-MI: Not Supported 00:11:17.140 Virtualization Management: Not Supported 00:11:17.140 Doorbell Buffer Config: Supported 00:11:17.140 Get LBA Status Capability: Not Supported 00:11:17.140 Command & Feature Lockdown Capability: Not Supported 00:11:17.140 Abort Command Limit: 4 00:11:17.140 Async Event Request Limit: 4 00:11:17.140 Number of Firmware Slots: N/A 00:11:17.140 Firmware Slot 1 Read-Only: N/A 00:11:17.140 Firmware Activation Without Reset: N/A 00:11:17.140 Multiple Update Detection Support: N/A 00:11:17.140 Firmware Update Granularity: No Information Provided 00:11:17.140 Per-Namespace SMART Log: Yes 00:11:17.140 Asymmetric Namespace Access Log Page: Not Supported 00:11:17.140 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:11:17.140 Command Effects Log Page: Supported 00:11:17.140 Get Log Page Extended Data: Supported 00:11:17.140 Telemetry Log Pages: Not Supported 00:11:17.140 Persistent Event Log Pages: Not Supported 00:11:17.140 Supported Log Pages Log Page: May Support 00:11:17.140 Commands Supported & Effects Log Page: Not Supported 00:11:17.140 Feature Identifiers & Effects Log Page:May Support 00:11:17.140 NVMe-MI Commands & Effects Log Page: May Support 00:11:17.140 Data Area 4 for Telemetry Log: Not Supported 00:11:17.140 Error Log Page Entries Supported: 1 00:11:17.140 Keep Alive: Not Supported 00:11:17.140 00:11:17.140 NVM Command Set Attributes 00:11:17.140 ========================== 00:11:17.140 Submission Queue Entry Size 00:11:17.140 Max: 64 00:11:17.140 Min: 64 00:11:17.140 Completion Queue Entry Size 00:11:17.140 Max: 16 00:11:17.140 Min: 16 00:11:17.140 Number of Namespaces: 256 00:11:17.140 Compare Command: Supported 00:11:17.140 Write Uncorrectable Command: Not Supported 00:11:17.140 Dataset Management Command: Supported 00:11:17.140 Write Zeroes Command: Supported 00:11:17.140 Set Features Save Field: Supported 00:11:17.140 Reservations: Not Supported 00:11:17.140 Timestamp: Supported 00:11:17.140 Copy: Supported 00:11:17.140 Volatile Write Cache: Present 00:11:17.140 Atomic Write Unit (Normal): 1 00:11:17.140 Atomic Write Unit (PFail): 1 00:11:17.140 Atomic Compare & Write Unit: 1 00:11:17.140 Fused Compare & Write: Not Supported 00:11:17.140 Scatter-Gather List 00:11:17.140 SGL Command Set: Supported 00:11:17.140 SGL Keyed: Not Supported 00:11:17.140 SGL Bit Bucket Descriptor: Not Supported 00:11:17.140 SGL Metadata Pointer: Not Supported 00:11:17.140 Oversized SGL: Not Supported 00:11:17.140 SGL Metadata Address: Not Supported 00:11:17.140 SGL Offset: Not Supported 00:11:17.140 Transport SGL Data Block: Not Supported 00:11:17.140 Replay Protected Memory Block: Not Supported 00:11:17.140 00:11:17.140 Firmware Slot Information 00:11:17.140 ========================= 00:11:17.140 Active slot: 1 00:11:17.140 Slot 1 Firmware Revision: 1.0 00:11:17.140 00:11:17.140 00:11:17.140 Commands Supported and Effects 00:11:17.140 ============================== 00:11:17.140 Admin Commands 00:11:17.140 -------------- 00:11:17.140 Delete I/O Submission Queue (00h): Supported 00:11:17.140 Create I/O Submission Queue (01h): Supported 00:11:17.140 Get Log Page (02h): Supported 00:11:17.140 Delete I/O Completion Queue (04h): Supported 00:11:17.140 Create I/O Completion Queue (05h): Supported 00:11:17.140 Identify (06h): Supported 00:11:17.140 Abort (08h): Supported 
00:11:17.140 Set Features (09h): Supported 00:11:17.140 Get Features (0Ah): Supported 00:11:17.140 Asynchronous Event Request (0Ch): Supported 00:11:17.140 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:17.140 Directive Send (19h): Supported 00:11:17.140 Directive Receive (1Ah): Supported 00:11:17.140 Virtualization Management (1Ch): Supported 00:11:17.140 Doorbell Buffer Config (7Ch): Supported 00:11:17.140 Format NVM (80h): Supported LBA-Change 00:11:17.140 I/O Commands 00:11:17.140 ------------ 00:11:17.140 Flush (00h): Supported LBA-Change 00:11:17.140 Write (01h): Supported LBA-Change 00:11:17.140 Read (02h): Supported 00:11:17.140 Compare (05h): Supported 00:11:17.140 Write Zeroes (08h): Supported LBA-Change 00:11:17.140 Dataset Management (09h): Supported LBA-Change 00:11:17.140 Unknown (0Ch): Supported 00:11:17.140 Unknown (12h): Supported 00:11:17.140 Copy (19h): Supported LBA-Change 00:11:17.140 Unknown (1Dh): Supported LBA-Change 00:11:17.140 00:11:17.140 Error Log 00:11:17.140 ========= 00:11:17.140 00:11:17.140 Arbitration 00:11:17.140 =========== 00:11:17.140 Arbitration Burst: no limit 00:11:17.140 00:11:17.140 Power Management 00:11:17.140 ================ 00:11:17.141 Number of Power States: 1 00:11:17.141 Current Power State: Power State #0 00:11:17.141 Power State #0: 00:11:17.141 Max Power: 25.00 W 00:11:17.141 Non-Operational State: Operational 00:11:17.141 Entry Latency: 16 microseconds 00:11:17.141 Exit Latency: 4 microseconds 00:11:17.141 Relative Read Throughput: 0 00:11:17.141 Relative Read Latency: 0 00:11:17.141 Relative Write Throughput: 0 00:11:17.141 Relative Write Latency: 0 00:11:17.141 Idle Power: Not Reported 00:11:17.141 Active Power: Not Reported 00:11:17.141 Non-Operational Permissive Mode: Not Supported 00:11:17.141 00:11:17.141 Health Information 00:11:17.141 ================== 00:11:17.141 Critical Warnings: 00:11:17.141 Available Spare Space: OK 00:11:17.141 Temperature: OK 00:11:17.141 Device Reliability: OK 00:11:17.141 Read Only: No 00:11:17.141 Volatile Memory Backup: OK 00:11:17.141 Current Temperature: 323 Kelvin (50 Celsius) 00:11:17.141 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:17.141 Available Spare: 0% 00:11:17.141 Available Spare Threshold: 0% 00:11:17.141 Life Percentage Used: 0% 00:11:17.141 Data Units Read: 694 00:11:17.141 Data Units Written: 622 00:11:17.141 Host Read Commands: 40658 00:11:17.141 Host Write Commands: 40444 00:11:17.141 Controller Busy Time: 0 minutes 00:11:17.141 Power Cycles: 0 00:11:17.141 Power On Hours: 0 hours 00:11:17.141 Unsafe Shutdowns: 0 00:11:17.141 Unrecoverable Media Errors: 0 00:11:17.141 Lifetime Error Log Entries: 0 00:11:17.141 Warning Temperature Time: 0 minutes 00:11:17.141 Critical Temperature Time: 0 minutes 00:11:17.141 00:11:17.141 Number of Queues 00:11:17.141 ================ 00:11:17.141 Number of I/O Submission Queues: 64 00:11:17.141 Number of I/O Completion Queues: 64 00:11:17.141 00:11:17.141 ZNS Specific Controller Data 00:11:17.141 ============================ 00:11:17.141 Zone Append Size Limit: 0 00:11:17.141 00:11:17.141 00:11:17.141 Active Namespaces 00:11:17.141 ================= 00:11:17.141 Namespace ID:1 00:11:17.141 Error Recovery Timeout: Unlimited 00:11:17.141 Command Set Identifier: NVM (00h) 00:11:17.141 Deallocate: Supported 00:11:17.141 Deallocated/Unwritten Error: Supported 00:11:17.141 Deallocated Read Value: All 0x00 00:11:17.141 Deallocate in Write Zeroes: Not Supported 00:11:17.141 Deallocated Guard Field: 0xFFFF 00:11:17.141 Flush: 
Supported 00:11:17.141 Reservation: Not Supported 00:11:17.141 Metadata Transferred as: Separate Metadata Buffer 00:11:17.141 Namespace Sharing Capabilities: Private 00:11:17.141 Size (in LBAs): 1548666 (5GiB) 00:11:17.141 Capacity (in LBAs): 1548666 (5GiB) 00:11:17.141 Utilization (in LBAs): 1548666 (5GiB) 00:11:17.141 Thin Provisioning: Not Supported 00:11:17.141 Per-NS Atomic Units: No 00:11:17.141 Maximum Single Source Range Length: 128 00:11:17.141 Maximum Copy Length: 128 00:11:17.141 Maximum Source Range Count: 128 00:11:17.141 NGUID/EUI64 Never Reused: No 00:11:17.141 Namespace Write Protected: No 00:11:17.141 Number of LBA Formats: 8 00:11:17.141 Current LBA Format: LBA Format #07 00:11:17.141 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:17.141 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:17.141 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:17.141 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:17.141 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:17.141 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:17.141 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:17.141 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:17.141 00:11:17.141 NVM Specific Namespace Data 00:11:17.141 =========================== 00:11:17.141 Logical Block Storage Tag Mask: 0 00:11:17.141 Protection Information Capabilities: 00:11:17.141 16b Guard Protection Information Storage Tag Support: No 00:11:17.141 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:17.141 Storage Tag Check Read Support: No 00:11:17.141 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.141 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.141 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.141 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.141 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.141 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.141 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.141 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.141 19:29:36 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:17.141 19:29:36 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:11:17.400 ===================================================== 00:11:17.400 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:17.400 ===================================================== 00:11:17.400 Controller Capabilities/Features 00:11:17.400 ================================ 00:11:17.400 Vendor ID: 1b36 00:11:17.400 Subsystem Vendor ID: 1af4 00:11:17.400 Serial Number: 12341 00:11:17.400 Model Number: QEMU NVMe Ctrl 00:11:17.400 Firmware Version: 8.0.0 00:11:17.400 Recommended Arb Burst: 6 00:11:17.400 IEEE OUI Identifier: 00 54 52 00:11:17.400 Multi-path I/O 00:11:17.400 May have multiple subsystem ports: No 00:11:17.400 May have multiple controllers: No 00:11:17.400 Associated with SR-IOV VF: No 00:11:17.400 Max Data Transfer Size: 524288 00:11:17.400 Max Number of Namespaces: 256 00:11:17.400 Max Number of I/O Queues: 64 00:11:17.400 NVMe 
Specification Version (VS): 1.4 00:11:17.400 NVMe Specification Version (Identify): 1.4 00:11:17.400 Maximum Queue Entries: 2048 00:11:17.400 Contiguous Queues Required: Yes 00:11:17.400 Arbitration Mechanisms Supported 00:11:17.400 Weighted Round Robin: Not Supported 00:11:17.400 Vendor Specific: Not Supported 00:11:17.400 Reset Timeout: 7500 ms 00:11:17.400 Doorbell Stride: 4 bytes 00:11:17.400 NVM Subsystem Reset: Not Supported 00:11:17.400 Command Sets Supported 00:11:17.400 NVM Command Set: Supported 00:11:17.400 Boot Partition: Not Supported 00:11:17.400 Memory Page Size Minimum: 4096 bytes 00:11:17.400 Memory Page Size Maximum: 65536 bytes 00:11:17.400 Persistent Memory Region: Not Supported 00:11:17.400 Optional Asynchronous Events Supported 00:11:17.400 Namespace Attribute Notices: Supported 00:11:17.400 Firmware Activation Notices: Not Supported 00:11:17.400 ANA Change Notices: Not Supported 00:11:17.400 PLE Aggregate Log Change Notices: Not Supported 00:11:17.400 LBA Status Info Alert Notices: Not Supported 00:11:17.400 EGE Aggregate Log Change Notices: Not Supported 00:11:17.400 Normal NVM Subsystem Shutdown event: Not Supported 00:11:17.400 Zone Descriptor Change Notices: Not Supported 00:11:17.400 Discovery Log Change Notices: Not Supported 00:11:17.400 Controller Attributes 00:11:17.400 128-bit Host Identifier: Not Supported 00:11:17.400 Non-Operational Permissive Mode: Not Supported 00:11:17.400 NVM Sets: Not Supported 00:11:17.400 Read Recovery Levels: Not Supported 00:11:17.400 Endurance Groups: Not Supported 00:11:17.400 Predictable Latency Mode: Not Supported 00:11:17.400 Traffic Based Keep ALive: Not Supported 00:11:17.400 Namespace Granularity: Not Supported 00:11:17.400 SQ Associations: Not Supported 00:11:17.400 UUID List: Not Supported 00:11:17.400 Multi-Domain Subsystem: Not Supported 00:11:17.400 Fixed Capacity Management: Not Supported 00:11:17.400 Variable Capacity Management: Not Supported 00:11:17.400 Delete Endurance Group: Not Supported 00:11:17.400 Delete NVM Set: Not Supported 00:11:17.400 Extended LBA Formats Supported: Supported 00:11:17.400 Flexible Data Placement Supported: Not Supported 00:11:17.400 00:11:17.400 Controller Memory Buffer Support 00:11:17.400 ================================ 00:11:17.400 Supported: No 00:11:17.401 00:11:17.401 Persistent Memory Region Support 00:11:17.401 ================================ 00:11:17.401 Supported: No 00:11:17.401 00:11:17.401 Admin Command Set Attributes 00:11:17.401 ============================ 00:11:17.401 Security Send/Receive: Not Supported 00:11:17.401 Format NVM: Supported 00:11:17.401 Firmware Activate/Download: Not Supported 00:11:17.401 Namespace Management: Supported 00:11:17.401 Device Self-Test: Not Supported 00:11:17.401 Directives: Supported 00:11:17.401 NVMe-MI: Not Supported 00:11:17.401 Virtualization Management: Not Supported 00:11:17.401 Doorbell Buffer Config: Supported 00:11:17.401 Get LBA Status Capability: Not Supported 00:11:17.401 Command & Feature Lockdown Capability: Not Supported 00:11:17.401 Abort Command Limit: 4 00:11:17.401 Async Event Request Limit: 4 00:11:17.401 Number of Firmware Slots: N/A 00:11:17.401 Firmware Slot 1 Read-Only: N/A 00:11:17.401 Firmware Activation Without Reset: N/A 00:11:17.401 Multiple Update Detection Support: N/A 00:11:17.401 Firmware Update Granularity: No Information Provided 00:11:17.401 Per-Namespace SMART Log: Yes 00:11:17.401 Asymmetric Namespace Access Log Page: Not Supported 00:11:17.401 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:11:17.401 Command Effects Log Page: Supported 00:11:17.401 Get Log Page Extended Data: Supported 00:11:17.401 Telemetry Log Pages: Not Supported 00:11:17.401 Persistent Event Log Pages: Not Supported 00:11:17.401 Supported Log Pages Log Page: May Support 00:11:17.401 Commands Supported & Effects Log Page: Not Supported 00:11:17.401 Feature Identifiers & Effects Log Page:May Support 00:11:17.401 NVMe-MI Commands & Effects Log Page: May Support 00:11:17.401 Data Area 4 for Telemetry Log: Not Supported 00:11:17.401 Error Log Page Entries Supported: 1 00:11:17.401 Keep Alive: Not Supported 00:11:17.401 00:11:17.401 NVM Command Set Attributes 00:11:17.401 ========================== 00:11:17.401 Submission Queue Entry Size 00:11:17.401 Max: 64 00:11:17.401 Min: 64 00:11:17.401 Completion Queue Entry Size 00:11:17.401 Max: 16 00:11:17.401 Min: 16 00:11:17.401 Number of Namespaces: 256 00:11:17.401 Compare Command: Supported 00:11:17.401 Write Uncorrectable Command: Not Supported 00:11:17.401 Dataset Management Command: Supported 00:11:17.401 Write Zeroes Command: Supported 00:11:17.401 Set Features Save Field: Supported 00:11:17.401 Reservations: Not Supported 00:11:17.401 Timestamp: Supported 00:11:17.401 Copy: Supported 00:11:17.401 Volatile Write Cache: Present 00:11:17.401 Atomic Write Unit (Normal): 1 00:11:17.401 Atomic Write Unit (PFail): 1 00:11:17.401 Atomic Compare & Write Unit: 1 00:11:17.401 Fused Compare & Write: Not Supported 00:11:17.401 Scatter-Gather List 00:11:17.401 SGL Command Set: Supported 00:11:17.401 SGL Keyed: Not Supported 00:11:17.401 SGL Bit Bucket Descriptor: Not Supported 00:11:17.401 SGL Metadata Pointer: Not Supported 00:11:17.401 Oversized SGL: Not Supported 00:11:17.401 SGL Metadata Address: Not Supported 00:11:17.401 SGL Offset: Not Supported 00:11:17.401 Transport SGL Data Block: Not Supported 00:11:17.401 Replay Protected Memory Block: Not Supported 00:11:17.401 00:11:17.401 Firmware Slot Information 00:11:17.401 ========================= 00:11:17.401 Active slot: 1 00:11:17.401 Slot 1 Firmware Revision: 1.0 00:11:17.401 00:11:17.401 00:11:17.401 Commands Supported and Effects 00:11:17.401 ============================== 00:11:17.401 Admin Commands 00:11:17.401 -------------- 00:11:17.401 Delete I/O Submission Queue (00h): Supported 00:11:17.401 Create I/O Submission Queue (01h): Supported 00:11:17.401 Get Log Page (02h): Supported 00:11:17.401 Delete I/O Completion Queue (04h): Supported 00:11:17.401 Create I/O Completion Queue (05h): Supported 00:11:17.401 Identify (06h): Supported 00:11:17.401 Abort (08h): Supported 00:11:17.401 Set Features (09h): Supported 00:11:17.401 Get Features (0Ah): Supported 00:11:17.401 Asynchronous Event Request (0Ch): Supported 00:11:17.401 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:17.401 Directive Send (19h): Supported 00:11:17.401 Directive Receive (1Ah): Supported 00:11:17.401 Virtualization Management (1Ch): Supported 00:11:17.401 Doorbell Buffer Config (7Ch): Supported 00:11:17.401 Format NVM (80h): Supported LBA-Change 00:11:17.401 I/O Commands 00:11:17.401 ------------ 00:11:17.401 Flush (00h): Supported LBA-Change 00:11:17.401 Write (01h): Supported LBA-Change 00:11:17.401 Read (02h): Supported 00:11:17.401 Compare (05h): Supported 00:11:17.401 Write Zeroes (08h): Supported LBA-Change 00:11:17.401 Dataset Management (09h): Supported LBA-Change 00:11:17.401 Unknown (0Ch): Supported 00:11:17.401 Unknown (12h): Supported 00:11:17.401 Copy (19h): Supported LBA-Change 00:11:17.401 Unknown (1Dh): 
Supported LBA-Change 00:11:17.401 00:11:17.401 Error Log 00:11:17.401 ========= 00:11:17.401 00:11:17.401 Arbitration 00:11:17.401 =========== 00:11:17.401 Arbitration Burst: no limit 00:11:17.401 00:11:17.401 Power Management 00:11:17.401 ================ 00:11:17.401 Number of Power States: 1 00:11:17.401 Current Power State: Power State #0 00:11:17.401 Power State #0: 00:11:17.401 Max Power: 25.00 W 00:11:17.401 Non-Operational State: Operational 00:11:17.401 Entry Latency: 16 microseconds 00:11:17.401 Exit Latency: 4 microseconds 00:11:17.401 Relative Read Throughput: 0 00:11:17.401 Relative Read Latency: 0 00:11:17.401 Relative Write Throughput: 0 00:11:17.401 Relative Write Latency: 0 00:11:17.401 Idle Power: Not Reported 00:11:17.401 Active Power: Not Reported 00:11:17.401 Non-Operational Permissive Mode: Not Supported 00:11:17.401 00:11:17.401 Health Information 00:11:17.401 ================== 00:11:17.401 Critical Warnings: 00:11:17.401 Available Spare Space: OK 00:11:17.401 Temperature: OK 00:11:17.401 Device Reliability: OK 00:11:17.401 Read Only: No 00:11:17.401 Volatile Memory Backup: OK 00:11:17.401 Current Temperature: 323 Kelvin (50 Celsius) 00:11:17.401 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:17.401 Available Spare: 0% 00:11:17.401 Available Spare Threshold: 0% 00:11:17.401 Life Percentage Used: 0% 00:11:17.401 Data Units Read: 1086 00:11:17.401 Data Units Written: 950 00:11:17.401 Host Read Commands: 60171 00:11:17.401 Host Write Commands: 58911 00:11:17.401 Controller Busy Time: 0 minutes 00:11:17.401 Power Cycles: 0 00:11:17.401 Power On Hours: 0 hours 00:11:17.401 Unsafe Shutdowns: 0 00:11:17.401 Unrecoverable Media Errors: 0 00:11:17.401 Lifetime Error Log Entries: 0 00:11:17.401 Warning Temperature Time: 0 minutes 00:11:17.401 Critical Temperature Time: 0 minutes 00:11:17.401 00:11:17.401 Number of Queues 00:11:17.401 ================ 00:11:17.401 Number of I/O Submission Queues: 64 00:11:17.401 Number of I/O Completion Queues: 64 00:11:17.401 00:11:17.401 ZNS Specific Controller Data 00:11:17.401 ============================ 00:11:17.401 Zone Append Size Limit: 0 00:11:17.401 00:11:17.401 00:11:17.401 Active Namespaces 00:11:17.401 ================= 00:11:17.401 Namespace ID:1 00:11:17.401 Error Recovery Timeout: Unlimited 00:11:17.401 Command Set Identifier: NVM (00h) 00:11:17.401 Deallocate: Supported 00:11:17.401 Deallocated/Unwritten Error: Supported 00:11:17.401 Deallocated Read Value: All 0x00 00:11:17.401 Deallocate in Write Zeroes: Not Supported 00:11:17.401 Deallocated Guard Field: 0xFFFF 00:11:17.401 Flush: Supported 00:11:17.401 Reservation: Not Supported 00:11:17.401 Namespace Sharing Capabilities: Private 00:11:17.401 Size (in LBAs): 1310720 (5GiB) 00:11:17.401 Capacity (in LBAs): 1310720 (5GiB) 00:11:17.401 Utilization (in LBAs): 1310720 (5GiB) 00:11:17.401 Thin Provisioning: Not Supported 00:11:17.401 Per-NS Atomic Units: No 00:11:17.401 Maximum Single Source Range Length: 128 00:11:17.401 Maximum Copy Length: 128 00:11:17.401 Maximum Source Range Count: 128 00:11:17.401 NGUID/EUI64 Never Reused: No 00:11:17.401 Namespace Write Protected: No 00:11:17.401 Number of LBA Formats: 8 00:11:17.401 Current LBA Format: LBA Format #04 00:11:17.402 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:17.402 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:17.402 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:17.402 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:17.402 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:11:17.402 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:17.402 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:17.402 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:17.402 00:11:17.402 NVM Specific Namespace Data 00:11:17.402 =========================== 00:11:17.402 Logical Block Storage Tag Mask: 0 00:11:17.402 Protection Information Capabilities: 00:11:17.402 16b Guard Protection Information Storage Tag Support: No 00:11:17.402 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:17.402 Storage Tag Check Read Support: No 00:11:17.402 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.402 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.402 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.402 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.402 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.402 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.402 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.402 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.402 19:29:36 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:17.402 19:29:36 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:11:17.661 ===================================================== 00:11:17.661 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:17.661 ===================================================== 00:11:17.661 Controller Capabilities/Features 00:11:17.661 ================================ 00:11:17.661 Vendor ID: 1b36 00:11:17.661 Subsystem Vendor ID: 1af4 00:11:17.661 Serial Number: 12342 00:11:17.661 Model Number: QEMU NVMe Ctrl 00:11:17.661 Firmware Version: 8.0.0 00:11:17.661 Recommended Arb Burst: 6 00:11:17.661 IEEE OUI Identifier: 00 54 52 00:11:17.661 Multi-path I/O 00:11:17.661 May have multiple subsystem ports: No 00:11:17.661 May have multiple controllers: No 00:11:17.661 Associated with SR-IOV VF: No 00:11:17.661 Max Data Transfer Size: 524288 00:11:17.661 Max Number of Namespaces: 256 00:11:17.661 Max Number of I/O Queues: 64 00:11:17.661 NVMe Specification Version (VS): 1.4 00:11:17.661 NVMe Specification Version (Identify): 1.4 00:11:17.661 Maximum Queue Entries: 2048 00:11:17.661 Contiguous Queues Required: Yes 00:11:17.661 Arbitration Mechanisms Supported 00:11:17.661 Weighted Round Robin: Not Supported 00:11:17.661 Vendor Specific: Not Supported 00:11:17.661 Reset Timeout: 7500 ms 00:11:17.661 Doorbell Stride: 4 bytes 00:11:17.661 NVM Subsystem Reset: Not Supported 00:11:17.661 Command Sets Supported 00:11:17.661 NVM Command Set: Supported 00:11:17.661 Boot Partition: Not Supported 00:11:17.661 Memory Page Size Minimum: 4096 bytes 00:11:17.661 Memory Page Size Maximum: 65536 bytes 00:11:17.661 Persistent Memory Region: Not Supported 00:11:17.661 Optional Asynchronous Events Supported 00:11:17.661 Namespace Attribute Notices: Supported 00:11:17.661 Firmware Activation Notices: Not Supported 00:11:17.661 ANA Change Notices: Not Supported 00:11:17.661 PLE Aggregate Log Change Notices: Not Supported 00:11:17.661 LBA Status Info Alert Notices: 
Not Supported 00:11:17.661 EGE Aggregate Log Change Notices: Not Supported 00:11:17.661 Normal NVM Subsystem Shutdown event: Not Supported 00:11:17.661 Zone Descriptor Change Notices: Not Supported 00:11:17.661 Discovery Log Change Notices: Not Supported 00:11:17.661 Controller Attributes 00:11:17.661 128-bit Host Identifier: Not Supported 00:11:17.661 Non-Operational Permissive Mode: Not Supported 00:11:17.661 NVM Sets: Not Supported 00:11:17.661 Read Recovery Levels: Not Supported 00:11:17.661 Endurance Groups: Not Supported 00:11:17.661 Predictable Latency Mode: Not Supported 00:11:17.661 Traffic Based Keep ALive: Not Supported 00:11:17.661 Namespace Granularity: Not Supported 00:11:17.661 SQ Associations: Not Supported 00:11:17.661 UUID List: Not Supported 00:11:17.661 Multi-Domain Subsystem: Not Supported 00:11:17.661 Fixed Capacity Management: Not Supported 00:11:17.661 Variable Capacity Management: Not Supported 00:11:17.661 Delete Endurance Group: Not Supported 00:11:17.662 Delete NVM Set: Not Supported 00:11:17.662 Extended LBA Formats Supported: Supported 00:11:17.662 Flexible Data Placement Supported: Not Supported 00:11:17.662 00:11:17.662 Controller Memory Buffer Support 00:11:17.662 ================================ 00:11:17.662 Supported: No 00:11:17.662 00:11:17.662 Persistent Memory Region Support 00:11:17.662 ================================ 00:11:17.662 Supported: No 00:11:17.662 00:11:17.662 Admin Command Set Attributes 00:11:17.662 ============================ 00:11:17.662 Security Send/Receive: Not Supported 00:11:17.662 Format NVM: Supported 00:11:17.662 Firmware Activate/Download: Not Supported 00:11:17.662 Namespace Management: Supported 00:11:17.662 Device Self-Test: Not Supported 00:11:17.662 Directives: Supported 00:11:17.662 NVMe-MI: Not Supported 00:11:17.662 Virtualization Management: Not Supported 00:11:17.662 Doorbell Buffer Config: Supported 00:11:17.662 Get LBA Status Capability: Not Supported 00:11:17.662 Command & Feature Lockdown Capability: Not Supported 00:11:17.662 Abort Command Limit: 4 00:11:17.662 Async Event Request Limit: 4 00:11:17.662 Number of Firmware Slots: N/A 00:11:17.662 Firmware Slot 1 Read-Only: N/A 00:11:17.662 Firmware Activation Without Reset: N/A 00:11:17.662 Multiple Update Detection Support: N/A 00:11:17.662 Firmware Update Granularity: No Information Provided 00:11:17.662 Per-Namespace SMART Log: Yes 00:11:17.662 Asymmetric Namespace Access Log Page: Not Supported 00:11:17.662 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:11:17.662 Command Effects Log Page: Supported 00:11:17.662 Get Log Page Extended Data: Supported 00:11:17.662 Telemetry Log Pages: Not Supported 00:11:17.662 Persistent Event Log Pages: Not Supported 00:11:17.662 Supported Log Pages Log Page: May Support 00:11:17.662 Commands Supported & Effects Log Page: Not Supported 00:11:17.662 Feature Identifiers & Effects Log Page:May Support 00:11:17.662 NVMe-MI Commands & Effects Log Page: May Support 00:11:17.662 Data Area 4 for Telemetry Log: Not Supported 00:11:17.662 Error Log Page Entries Supported: 1 00:11:17.662 Keep Alive: Not Supported 00:11:17.662 00:11:17.662 NVM Command Set Attributes 00:11:17.662 ========================== 00:11:17.662 Submission Queue Entry Size 00:11:17.662 Max: 64 00:11:17.662 Min: 64 00:11:17.662 Completion Queue Entry Size 00:11:17.662 Max: 16 00:11:17.662 Min: 16 00:11:17.662 Number of Namespaces: 256 00:11:17.662 Compare Command: Supported 00:11:17.662 Write Uncorrectable Command: Not Supported 00:11:17.662 Dataset Management Command: 
Supported 00:11:17.662 Write Zeroes Command: Supported 00:11:17.662 Set Features Save Field: Supported 00:11:17.662 Reservations: Not Supported 00:11:17.662 Timestamp: Supported 00:11:17.662 Copy: Supported 00:11:17.662 Volatile Write Cache: Present 00:11:17.662 Atomic Write Unit (Normal): 1 00:11:17.662 Atomic Write Unit (PFail): 1 00:11:17.662 Atomic Compare & Write Unit: 1 00:11:17.662 Fused Compare & Write: Not Supported 00:11:17.662 Scatter-Gather List 00:11:17.662 SGL Command Set: Supported 00:11:17.662 SGL Keyed: Not Supported 00:11:17.662 SGL Bit Bucket Descriptor: Not Supported 00:11:17.662 SGL Metadata Pointer: Not Supported 00:11:17.662 Oversized SGL: Not Supported 00:11:17.662 SGL Metadata Address: Not Supported 00:11:17.662 SGL Offset: Not Supported 00:11:17.662 Transport SGL Data Block: Not Supported 00:11:17.662 Replay Protected Memory Block: Not Supported 00:11:17.662 00:11:17.662 Firmware Slot Information 00:11:17.662 ========================= 00:11:17.662 Active slot: 1 00:11:17.662 Slot 1 Firmware Revision: 1.0 00:11:17.662 00:11:17.662 00:11:17.662 Commands Supported and Effects 00:11:17.662 ============================== 00:11:17.662 Admin Commands 00:11:17.662 -------------- 00:11:17.662 Delete I/O Submission Queue (00h): Supported 00:11:17.662 Create I/O Submission Queue (01h): Supported 00:11:17.662 Get Log Page (02h): Supported 00:11:17.662 Delete I/O Completion Queue (04h): Supported 00:11:17.662 Create I/O Completion Queue (05h): Supported 00:11:17.662 Identify (06h): Supported 00:11:17.662 Abort (08h): Supported 00:11:17.662 Set Features (09h): Supported 00:11:17.662 Get Features (0Ah): Supported 00:11:17.662 Asynchronous Event Request (0Ch): Supported 00:11:17.662 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:17.662 Directive Send (19h): Supported 00:11:17.662 Directive Receive (1Ah): Supported 00:11:17.662 Virtualization Management (1Ch): Supported 00:11:17.662 Doorbell Buffer Config (7Ch): Supported 00:11:17.662 Format NVM (80h): Supported LBA-Change 00:11:17.662 I/O Commands 00:11:17.662 ------------ 00:11:17.662 Flush (00h): Supported LBA-Change 00:11:17.662 Write (01h): Supported LBA-Change 00:11:17.662 Read (02h): Supported 00:11:17.662 Compare (05h): Supported 00:11:17.662 Write Zeroes (08h): Supported LBA-Change 00:11:17.662 Dataset Management (09h): Supported LBA-Change 00:11:17.662 Unknown (0Ch): Supported 00:11:17.662 Unknown (12h): Supported 00:11:17.662 Copy (19h): Supported LBA-Change 00:11:17.662 Unknown (1Dh): Supported LBA-Change 00:11:17.662 00:11:17.662 Error Log 00:11:17.662 ========= 00:11:17.662 00:11:17.662 Arbitration 00:11:17.662 =========== 00:11:17.662 Arbitration Burst: no limit 00:11:17.662 00:11:17.662 Power Management 00:11:17.662 ================ 00:11:17.662 Number of Power States: 1 00:11:17.662 Current Power State: Power State #0 00:11:17.662 Power State #0: 00:11:17.662 Max Power: 25.00 W 00:11:17.662 Non-Operational State: Operational 00:11:17.662 Entry Latency: 16 microseconds 00:11:17.662 Exit Latency: 4 microseconds 00:11:17.662 Relative Read Throughput: 0 00:11:17.662 Relative Read Latency: 0 00:11:17.662 Relative Write Throughput: 0 00:11:17.662 Relative Write Latency: 0 00:11:17.662 Idle Power: Not Reported 00:11:17.662 Active Power: Not Reported 00:11:17.662 Non-Operational Permissive Mode: Not Supported 00:11:17.662 00:11:17.662 Health Information 00:11:17.662 ================== 00:11:17.662 Critical Warnings: 00:11:17.662 Available Spare Space: OK 00:11:17.662 Temperature: OK 00:11:17.662 Device 
Reliability: OK 00:11:17.662 Read Only: No 00:11:17.662 Volatile Memory Backup: OK 00:11:17.662 Current Temperature: 323 Kelvin (50 Celsius) 00:11:17.662 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:17.662 Available Spare: 0% 00:11:17.662 Available Spare Threshold: 0% 00:11:17.662 Life Percentage Used: 0% 00:11:17.662 Data Units Read: 2294 00:11:17.662 Data Units Written: 2081 00:11:17.662 Host Read Commands: 124781 00:11:17.662 Host Write Commands: 123050 00:11:17.662 Controller Busy Time: 0 minutes 00:11:17.662 Power Cycles: 0 00:11:17.662 Power On Hours: 0 hours 00:11:17.662 Unsafe Shutdowns: 0 00:11:17.662 Unrecoverable Media Errors: 0 00:11:17.662 Lifetime Error Log Entries: 0 00:11:17.662 Warning Temperature Time: 0 minutes 00:11:17.662 Critical Temperature Time: 0 minutes 00:11:17.662 00:11:17.662 Number of Queues 00:11:17.662 ================ 00:11:17.662 Number of I/O Submission Queues: 64 00:11:17.662 Number of I/O Completion Queues: 64 00:11:17.662 00:11:17.662 ZNS Specific Controller Data 00:11:17.662 ============================ 00:11:17.662 Zone Append Size Limit: 0 00:11:17.663 00:11:17.663 00:11:17.663 Active Namespaces 00:11:17.663 ================= 00:11:17.663 Namespace ID:1 00:11:17.663 Error Recovery Timeout: Unlimited 00:11:17.663 Command Set Identifier: NVM (00h) 00:11:17.663 Deallocate: Supported 00:11:17.663 Deallocated/Unwritten Error: Supported 00:11:17.663 Deallocated Read Value: All 0x00 00:11:17.663 Deallocate in Write Zeroes: Not Supported 00:11:17.663 Deallocated Guard Field: 0xFFFF 00:11:17.663 Flush: Supported 00:11:17.663 Reservation: Not Supported 00:11:17.663 Namespace Sharing Capabilities: Private 00:11:17.663 Size (in LBAs): 1048576 (4GiB) 00:11:17.663 Capacity (in LBAs): 1048576 (4GiB) 00:11:17.663 Utilization (in LBAs): 1048576 (4GiB) 00:11:17.663 Thin Provisioning: Not Supported 00:11:17.663 Per-NS Atomic Units: No 00:11:17.663 Maximum Single Source Range Length: 128 00:11:17.663 Maximum Copy Length: 128 00:11:17.663 Maximum Source Range Count: 128 00:11:17.663 NGUID/EUI64 Never Reused: No 00:11:17.663 Namespace Write Protected: No 00:11:17.663 Number of LBA Formats: 8 00:11:17.663 Current LBA Format: LBA Format #04 00:11:17.663 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:17.663 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:17.663 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:17.663 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:17.663 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:17.663 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:17.663 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:17.663 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:17.663 00:11:17.663 NVM Specific Namespace Data 00:11:17.663 =========================== 00:11:17.663 Logical Block Storage Tag Mask: 0 00:11:17.663 Protection Information Capabilities: 00:11:17.663 16b Guard Protection Information Storage Tag Support: No 00:11:17.663 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:17.663 Storage Tag Check Read Support: No 00:11:17.663 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.663 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.663 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.663 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.663 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.663 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.663 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.663 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.663 Namespace ID:2 00:11:17.663 Error Recovery Timeout: Unlimited 00:11:17.663 Command Set Identifier: NVM (00h) 00:11:17.663 Deallocate: Supported 00:11:17.663 Deallocated/Unwritten Error: Supported 00:11:17.663 Deallocated Read Value: All 0x00 00:11:17.663 Deallocate in Write Zeroes: Not Supported 00:11:17.663 Deallocated Guard Field: 0xFFFF 00:11:17.663 Flush: Supported 00:11:17.663 Reservation: Not Supported 00:11:17.663 Namespace Sharing Capabilities: Private 00:11:17.663 Size (in LBAs): 1048576 (4GiB) 00:11:17.663 Capacity (in LBAs): 1048576 (4GiB) 00:11:17.663 Utilization (in LBAs): 1048576 (4GiB) 00:11:17.663 Thin Provisioning: Not Supported 00:11:17.663 Per-NS Atomic Units: No 00:11:17.663 Maximum Single Source Range Length: 128 00:11:17.663 Maximum Copy Length: 128 00:11:17.663 Maximum Source Range Count: 128 00:11:17.663 NGUID/EUI64 Never Reused: No 00:11:17.663 Namespace Write Protected: No 00:11:17.663 Number of LBA Formats: 8 00:11:17.663 Current LBA Format: LBA Format #04 00:11:17.663 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:17.663 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:17.663 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:17.663 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:17.663 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:17.663 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:17.663 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:17.663 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:17.663 00:11:17.663 NVM Specific Namespace Data 00:11:17.663 =========================== 00:11:17.663 Logical Block Storage Tag Mask: 0 00:11:17.663 Protection Information Capabilities: 00:11:17.663 16b Guard Protection Information Storage Tag Support: No 00:11:17.663 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:17.663 Storage Tag Check Read Support: No 00:11:17.663 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.663 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.663 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.663 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.663 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.663 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.663 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.663 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.663 Namespace ID:3 00:11:17.663 Error Recovery Timeout: Unlimited 00:11:17.663 Command Set Identifier: NVM (00h) 00:11:17.663 Deallocate: Supported 00:11:17.663 Deallocated/Unwritten Error: Supported 00:11:17.663 Deallocated Read Value: All 0x00 00:11:17.663 Deallocate in Write Zeroes: Not Supported 00:11:17.663 Deallocated Guard Field: 0xFFFF 00:11:17.663 Flush: Supported 00:11:17.663 Reservation: Not Supported 00:11:17.663 
Namespace Sharing Capabilities: Private 00:11:17.663 Size (in LBAs): 1048576 (4GiB) 00:11:17.663 Capacity (in LBAs): 1048576 (4GiB) 00:11:17.663 Utilization (in LBAs): 1048576 (4GiB) 00:11:17.663 Thin Provisioning: Not Supported 00:11:17.663 Per-NS Atomic Units: No 00:11:17.663 Maximum Single Source Range Length: 128 00:11:17.663 Maximum Copy Length: 128 00:11:17.663 Maximum Source Range Count: 128 00:11:17.663 NGUID/EUI64 Never Reused: No 00:11:17.663 Namespace Write Protected: No 00:11:17.663 Number of LBA Formats: 8 00:11:17.663 Current LBA Format: LBA Format #04 00:11:17.663 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:17.663 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:17.663 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:17.663 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:17.663 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:17.663 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:17.663 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:11:17.663 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:17.663 00:11:17.663 NVM Specific Namespace Data 00:11:17.663 =========================== 00:11:17.663 Logical Block Storage Tag Mask: 0 00:11:17.663 Protection Information Capabilities: 00:11:17.663 16b Guard Protection Information Storage Tag Support: No 00:11:17.663 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:17.663 Storage Tag Check Read Support: No 00:11:17.663 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.663 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.663 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.663 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.663 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.663 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.663 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.663 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.663 19:29:36 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:11:17.663 19:29:36 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:11:17.923 ===================================================== 00:11:17.923 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:17.923 ===================================================== 00:11:17.923 Controller Capabilities/Features 00:11:17.923 ================================ 00:11:17.923 Vendor ID: 1b36 00:11:17.923 Subsystem Vendor ID: 1af4 00:11:17.923 Serial Number: 12343 00:11:17.923 Model Number: QEMU NVMe Ctrl 00:11:17.923 Firmware Version: 8.0.0 00:11:17.923 Recommended Arb Burst: 6 00:11:17.923 IEEE OUI Identifier: 00 54 52 00:11:17.923 Multi-path I/O 00:11:17.923 May have multiple subsystem ports: No 00:11:17.923 May have multiple controllers: Yes 00:11:17.923 Associated with SR-IOV VF: No 00:11:17.923 Max Data Transfer Size: 524288 00:11:17.923 Max Number of Namespaces: 256 00:11:17.923 Max Number of I/O Queues: 64 00:11:17.923 NVMe Specification Version (VS): 1.4 00:11:17.923 NVMe Specification Version (Identify): 1.4 00:11:17.923 Maximum Queue Entries: 2048 
00:11:17.923 Contiguous Queues Required: Yes 00:11:17.923 Arbitration Mechanisms Supported 00:11:17.923 Weighted Round Robin: Not Supported 00:11:17.923 Vendor Specific: Not Supported 00:11:17.923 Reset Timeout: 7500 ms 00:11:17.923 Doorbell Stride: 4 bytes 00:11:17.923 NVM Subsystem Reset: Not Supported 00:11:17.923 Command Sets Supported 00:11:17.923 NVM Command Set: Supported 00:11:17.923 Boot Partition: Not Supported 00:11:17.923 Memory Page Size Minimum: 4096 bytes 00:11:17.923 Memory Page Size Maximum: 65536 bytes 00:11:17.923 Persistent Memory Region: Not Supported 00:11:17.923 Optional Asynchronous Events Supported 00:11:17.923 Namespace Attribute Notices: Supported 00:11:17.923 Firmware Activation Notices: Not Supported 00:11:17.923 ANA Change Notices: Not Supported 00:11:17.923 PLE Aggregate Log Change Notices: Not Supported 00:11:17.923 LBA Status Info Alert Notices: Not Supported 00:11:17.923 EGE Aggregate Log Change Notices: Not Supported 00:11:17.923 Normal NVM Subsystem Shutdown event: Not Supported 00:11:17.923 Zone Descriptor Change Notices: Not Supported 00:11:17.923 Discovery Log Change Notices: Not Supported 00:11:17.923 Controller Attributes 00:11:17.923 128-bit Host Identifier: Not Supported 00:11:17.923 Non-Operational Permissive Mode: Not Supported 00:11:17.923 NVM Sets: Not Supported 00:11:17.923 Read Recovery Levels: Not Supported 00:11:17.923 Endurance Groups: Supported 00:11:17.923 Predictable Latency Mode: Not Supported 00:11:17.923 Traffic Based Keep Alive: Not Supported 00:11:17.923 Namespace Granularity: Not Supported 00:11:17.923 SQ Associations: Not Supported 00:11:17.923 UUID List: Not Supported 00:11:17.923 Multi-Domain Subsystem: Not Supported 00:11:17.923 Fixed Capacity Management: Not Supported 00:11:17.923 Variable Capacity Management: Not Supported 00:11:17.923 Delete Endurance Group: Not Supported 00:11:17.923 Delete NVM Set: Not Supported 00:11:17.923 Extended LBA Formats Supported: Supported 00:11:17.923 Flexible Data Placement Supported: Supported 00:11:17.923 00:11:17.923 Controller Memory Buffer Support 00:11:17.923 ================================ 00:11:17.923 Supported: No 00:11:17.923 00:11:17.923 Persistent Memory Region Support 00:11:17.923 ================================ 00:11:17.923 Supported: No 00:11:17.923 00:11:17.923 Admin Command Set Attributes 00:11:17.923 ============================ 00:11:17.923 Security Send/Receive: Not Supported 00:11:17.923 Format NVM: Supported 00:11:17.923 Firmware Activate/Download: Not Supported 00:11:17.923 Namespace Management: Supported 00:11:17.923 Device Self-Test: Not Supported 00:11:17.923 Directives: Supported 00:11:17.923 NVMe-MI: Not Supported 00:11:17.923 Virtualization Management: Not Supported 00:11:17.923 Doorbell Buffer Config: Supported 00:11:17.923 Get LBA Status Capability: Not Supported 00:11:17.923 Command & Feature Lockdown Capability: Not Supported 00:11:17.923 Abort Command Limit: 4 00:11:17.923 Async Event Request Limit: 4 00:11:17.923 Number of Firmware Slots: N/A 00:11:17.923 Firmware Slot 1 Read-Only: N/A 00:11:17.923 Firmware Activation Without Reset: N/A 00:11:17.923 Multiple Update Detection Support: N/A 00:11:17.924 Firmware Update Granularity: No Information Provided 00:11:17.924 Per-Namespace SMART Log: Yes 00:11:17.924 Asymmetric Namespace Access Log Page: Not Supported 00:11:17.924 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:11:17.924 Command Effects Log Page: Supported 00:11:17.924 Get Log Page Extended Data: Supported 00:11:17.924 Telemetry Log Pages: Not
Supported 00:11:17.924 Persistent Event Log Pages: Not Supported 00:11:17.924 Supported Log Pages Log Page: May Support 00:11:17.924 Commands Supported & Effects Log Page: Not Supported 00:11:17.924 Feature Identifiers & Effects Log Page: May Support 00:11:17.924 NVMe-MI Commands & Effects Log Page: May Support 00:11:17.924 Data Area 4 for Telemetry Log: Not Supported 00:11:17.924 Error Log Page Entries Supported: 1 00:11:17.924 Keep Alive: Not Supported 00:11:17.924 00:11:17.924 NVM Command Set Attributes 00:11:17.924 ========================== 00:11:17.924 Submission Queue Entry Size 00:11:17.924 Max: 64 00:11:17.924 Min: 64 00:11:17.924 Completion Queue Entry Size 00:11:17.924 Max: 16 00:11:17.924 Min: 16 00:11:17.924 Number of Namespaces: 256 00:11:17.924 Compare Command: Supported 00:11:17.924 Write Uncorrectable Command: Not Supported 00:11:17.924 Dataset Management Command: Supported 00:11:17.924 Write Zeroes Command: Supported 00:11:17.924 Set Features Save Field: Supported 00:11:17.924 Reservations: Not Supported 00:11:17.924 Timestamp: Supported 00:11:17.924 Copy: Supported 00:11:17.924 Volatile Write Cache: Present 00:11:17.924 Atomic Write Unit (Normal): 1 00:11:17.924 Atomic Write Unit (PFail): 1 00:11:17.924 Atomic Compare & Write Unit: 1 00:11:17.924 Fused Compare & Write: Not Supported 00:11:17.924 Scatter-Gather List 00:11:17.924 SGL Command Set: Supported 00:11:17.924 SGL Keyed: Not Supported 00:11:17.924 SGL Bit Bucket Descriptor: Not Supported 00:11:17.924 SGL Metadata Pointer: Not Supported 00:11:17.924 Oversized SGL: Not Supported 00:11:17.924 SGL Metadata Address: Not Supported 00:11:17.924 SGL Offset: Not Supported 00:11:17.924 Transport SGL Data Block: Not Supported 00:11:17.924 Replay Protected Memory Block: Not Supported 00:11:17.924 00:11:17.924 Firmware Slot Information 00:11:17.924 ========================= 00:11:17.924 Active slot: 1 00:11:17.924 Slot 1 Firmware Revision: 1.0 00:11:17.924 00:11:17.924 00:11:17.924 Commands Supported and Effects 00:11:17.924 ============================== 00:11:17.924 Admin Commands 00:11:17.924 -------------- 00:11:17.924 Delete I/O Submission Queue (00h): Supported 00:11:17.924 Create I/O Submission Queue (01h): Supported 00:11:17.924 Get Log Page (02h): Supported 00:11:17.924 Delete I/O Completion Queue (04h): Supported 00:11:17.924 Create I/O Completion Queue (05h): Supported 00:11:17.924 Identify (06h): Supported 00:11:17.924 Abort (08h): Supported 00:11:17.924 Set Features (09h): Supported 00:11:17.924 Get Features (0Ah): Supported 00:11:17.924 Asynchronous Event Request (0Ch): Supported 00:11:17.924 Namespace Attachment (15h): Supported NS-Inventory-Change 00:11:17.924 Directive Send (19h): Supported 00:11:17.924 Directive Receive (1Ah): Supported 00:11:17.924 Virtualization Management (1Ch): Supported 00:11:17.924 Doorbell Buffer Config (7Ch): Supported 00:11:17.924 Format NVM (80h): Supported LBA-Change 00:11:17.924 I/O Commands 00:11:17.924 ------------ 00:11:17.924 Flush (00h): Supported LBA-Change 00:11:17.924 Write (01h): Supported LBA-Change 00:11:17.924 Read (02h): Supported 00:11:17.924 Compare (05h): Supported 00:11:17.924 Write Zeroes (08h): Supported LBA-Change 00:11:17.924 Dataset Management (09h): Supported LBA-Change 00:11:17.924 Unknown (0Ch): Supported 00:11:17.924 Unknown (12h): Supported 00:11:17.924 Copy (19h): Supported LBA-Change 00:11:17.924 Unknown (1Dh): Supported LBA-Change 00:11:17.924 00:11:17.924 Error Log 00:11:17.924 ========= 00:11:17.924 00:11:17.924 Arbitration 00:11:17.924 ===========
00:11:17.924 Arbitration Burst: no limit 00:11:17.924 00:11:17.924 Power Management 00:11:17.924 ================ 00:11:17.924 Number of Power States: 1 00:11:17.924 Current Power State: Power State #0 00:11:17.924 Power State #0: 00:11:17.924 Max Power: 25.00 W 00:11:17.924 Non-Operational State: Operational 00:11:17.924 Entry Latency: 16 microseconds 00:11:17.924 Exit Latency: 4 microseconds 00:11:17.924 Relative Read Throughput: 0 00:11:17.924 Relative Read Latency: 0 00:11:17.924 Relative Write Throughput: 0 00:11:17.924 Relative Write Latency: 0 00:11:17.924 Idle Power: Not Reported 00:11:17.924 Active Power: Not Reported 00:11:17.924 Non-Operational Permissive Mode: Not Supported 00:11:17.924 00:11:17.924 Health Information 00:11:17.924 ================== 00:11:17.924 Critical Warnings: 00:11:17.924 Available Spare Space: OK 00:11:17.924 Temperature: OK 00:11:17.924 Device Reliability: OK 00:11:17.924 Read Only: No 00:11:17.924 Volatile Memory Backup: OK 00:11:17.924 Current Temperature: 323 Kelvin (50 Celsius) 00:11:17.924 Temperature Threshold: 343 Kelvin (70 Celsius) 00:11:17.924 Available Spare: 0% 00:11:17.924 Available Spare Threshold: 0% 00:11:17.924 Life Percentage Used: 0% 00:11:17.924 Data Units Read: 888 00:11:17.924 Data Units Written: 817 00:11:17.924 Host Read Commands: 42712 00:11:17.924 Host Write Commands: 42137 00:11:17.924 Controller Busy Time: 0 minutes 00:11:17.924 Power Cycles: 0 00:11:17.924 Power On Hours: 0 hours 00:11:17.924 Unsafe Shutdowns: 0 00:11:17.924 Unrecoverable Media Errors: 0 00:11:17.924 Lifetime Error Log Entries: 0 00:11:17.924 Warning Temperature Time: 0 minutes 00:11:17.924 Critical Temperature Time: 0 minutes 00:11:17.924 00:11:17.924 Number of Queues 00:11:17.924 ================ 00:11:17.924 Number of I/O Submission Queues: 64 00:11:17.924 Number of I/O Completion Queues: 64 00:11:17.924 00:11:17.924 ZNS Specific Controller Data 00:11:17.924 ============================ 00:11:17.924 Zone Append Size Limit: 0 00:11:17.924 00:11:17.924 00:11:17.924 Active Namespaces 00:11:17.924 ================= 00:11:17.924 Namespace ID:1 00:11:17.924 Error Recovery Timeout: Unlimited 00:11:17.924 Command Set Identifier: NVM (00h) 00:11:17.924 Deallocate: Supported 00:11:17.924 Deallocated/Unwritten Error: Supported 00:11:17.924 Deallocated Read Value: All 0x00 00:11:17.924 Deallocate in Write Zeroes: Not Supported 00:11:17.924 Deallocated Guard Field: 0xFFFF 00:11:17.924 Flush: Supported 00:11:17.924 Reservation: Not Supported 00:11:17.924 Namespace Sharing Capabilities: Multiple Controllers 00:11:17.924 Size (in LBAs): 262144 (1GiB) 00:11:17.924 Capacity (in LBAs): 262144 (1GiB) 00:11:17.924 Utilization (in LBAs): 262144 (1GiB) 00:11:17.924 Thin Provisioning: Not Supported 00:11:17.924 Per-NS Atomic Units: No 00:11:17.924 Maximum Single Source Range Length: 128 00:11:17.924 Maximum Copy Length: 128 00:11:17.924 Maximum Source Range Count: 128 00:11:17.924 NGUID/EUI64 Never Reused: No 00:11:17.924 Namespace Write Protected: No 00:11:17.924 Endurance group ID: 1 00:11:17.924 Number of LBA Formats: 8 00:11:17.924 Current LBA Format: LBA Format #04 00:11:17.924 LBA Format #00: Data Size: 512 Metadata Size: 0 00:11:17.924 LBA Format #01: Data Size: 512 Metadata Size: 8 00:11:17.924 LBA Format #02: Data Size: 512 Metadata Size: 16 00:11:17.924 LBA Format #03: Data Size: 512 Metadata Size: 64 00:11:17.924 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:11:17.924 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:11:17.924 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:11:17.924 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:11:17.924 00:11:17.924 Get Feature FDP: 00:11:17.924 ================ 00:11:17.924 Enabled: Yes 00:11:17.924 FDP configuration index: 0 00:11:17.924 00:11:17.924 FDP configurations log page 00:11:17.924 =========================== 00:11:17.924 Number of FDP configurations: 1 00:11:17.924 Version: 0 00:11:17.924 Size: 112 00:11:17.924 FDP Configuration Descriptor: 0 00:11:17.924 Descriptor Size: 96 00:11:17.924 Reclaim Group Identifier format: 2 00:11:17.924 FDP Volatile Write Cache: Not Present 00:11:17.924 FDP Configuration: Valid 00:11:17.924 Vendor Specific Size: 0 00:11:17.924 Number of Reclaim Groups: 2 00:11:17.924 Number of Reclaim Unit Handles: 8 00:11:17.924 Max Placement Identifiers: 128 00:11:17.924 Number of Namespaces Supported: 256 00:11:17.924 Reclaim Unit Nominal Size: 6000000 bytes 00:11:17.924 Estimated Reclaim Unit Time Limit: Not Reported 00:11:17.924 RUH Desc #000: RUH Type: Initially Isolated 00:11:17.924 RUH Desc #001: RUH Type: Initially Isolated 00:11:17.924 RUH Desc #002: RUH Type: Initially Isolated 00:11:17.924 RUH Desc #003: RUH Type: Initially Isolated 00:11:17.924 RUH Desc #004: RUH Type: Initially Isolated 00:11:17.924 RUH Desc #005: RUH Type: Initially Isolated 00:11:17.924 RUH Desc #006: RUH Type: Initially Isolated 00:11:17.924 RUH Desc #007: RUH Type: Initially Isolated 00:11:17.924 00:11:17.924 FDP reclaim unit handle usage log page 00:11:17.924 ====================================== 00:11:17.924 Number of Reclaim Unit Handles: 8 00:11:17.924 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:17.924 RUH Usage Desc #001: RUH Attributes: Unused 00:11:17.924 RUH Usage Desc #002: RUH Attributes: Unused 00:11:17.925 RUH Usage Desc #003: RUH Attributes: Unused 00:11:17.925 RUH Usage Desc #004: RUH Attributes: Unused 00:11:17.925 RUH Usage Desc #005: RUH Attributes: Unused 00:11:17.925 RUH Usage Desc #006: RUH Attributes: Unused 00:11:17.925 RUH Usage Desc #007: RUH Attributes: Unused 00:11:17.925 00:11:17.925 FDP statistics log page 00:11:17.925 ======================= 00:11:17.925 Host bytes with metadata written: 522493952 00:11:17.925 Media bytes with metadata written: 522551296 00:11:17.925 Media bytes erased: 0 00:11:17.925 00:11:17.925 FDP events log page 00:11:17.925 =================== 00:11:17.925 Number of FDP events: 0 00:11:17.925 00:11:17.925 NVM Specific Namespace Data 00:11:17.925 =========================== 00:11:17.925 Logical Block Storage Tag Mask: 0 00:11:17.925 Protection Information Capabilities: 00:11:17.925 16b Guard Protection Information Storage Tag Support: No 00:11:17.925 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:11:17.925 Storage Tag Check Read Support: No 00:11:17.925 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.925 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.925 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.925 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.925 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.925 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.925 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.925 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:11:17.925 00:11:17.925 real 0m1.225s 00:11:17.925 user 0m0.466s 00:11:17.925 sys 0m0.541s 00:11:17.925 19:29:36 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:17.925 19:29:36 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:11:17.925 ************************************ 00:11:17.925 END TEST nvme_identify 00:11:17.925 ************************************ 00:11:17.925 19:29:36 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:11:17.925 19:29:36 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:17.925 19:29:36 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:17.925 19:29:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:17.925 ************************************ 00:11:17.925 START TEST nvme_perf 00:11:17.925 ************************************ 00:11:17.925 19:29:36 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:11:17.925 19:29:36 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:11:19.302 Initializing NVMe Controllers 00:11:19.302 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:19.302 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:19.302 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:19.302 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:19.302 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:11:19.302 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:11:19.302 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:11:19.302 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:11:19.302 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:11:19.302 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:11:19.302 Initialization complete. Launching workers. 
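The nvme_perf run above probes every attached controller and drives all six namespaces at once. A minimal sketch of reproducing the same workload by hand, assuming the repo path used earlier in this log and adding perf's -r transport filter (not used in the logged run) so that only the 0000:00:13.0 controller is exercised; the -i 0 and -N arguments from the logged command are omitted here:
# Sketch only; the binary path and traddr are taken from this job's log.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
sudo "$SPDK_BIN/spdk_nvme_perf" \
    -r 'trtype:PCIe traddr:0000:00:13.0' \
    -q 128 \
    -w read \
    -o 12288 \
    -t 1 \
    -LL
# -q 128   I/O queue depth
# -w read  sequential read workload
# -o 12288 I/O size in bytes (12 KiB)
# -t 1     run time in seconds
# -LL      latency tracking; this is what produces the summary and
#          histogram tables that follow
Sanity check against the summary table below: 18574.30 IOPS at 12288 bytes per I/O is roughly 228.2 MB/s, i.e. 217.67 MiB/s, which matches the MiB/s column, and the six per-namespace IOPS figures sum to the reported total of 111509.60 (within rounding).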
00:11:19.302 ======================================================== 00:11:19.302 Latency(us) 00:11:19.302 Device Information : IOPS MiB/s Average min max 00:11:19.303 PCIE (0000:00:10.0) NSID 1 from core 0: 18574.30 217.67 6900.03 5482.31 32771.26 00:11:19.303 PCIE (0000:00:11.0) NSID 1 from core 0: 18574.30 217.67 6890.78 5566.03 31000.69 00:11:19.303 PCIE (0000:00:13.0) NSID 1 from core 0: 18574.30 217.67 6880.32 5551.66 29779.71 00:11:19.303 PCIE (0000:00:12.0) NSID 1 from core 0: 18574.30 217.67 6869.73 5567.36 28007.49 00:11:19.303 PCIE (0000:00:12.0) NSID 2 from core 0: 18574.30 217.67 6858.84 5609.52 26235.96 00:11:19.303 PCIE (0000:00:12.0) NSID 3 from core 0: 18638.12 218.42 6824.46 5578.41 20965.79 00:11:19.303 ======================================================== 00:11:19.303 Total : 111509.60 1306.75 6870.67 5482.31 32771.26 00:11:19.303 00:11:19.303 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:19.303 ================================================================================= 00:11:19.303 1.00000% : 5822.622us 00:11:19.303 10.00000% : 6099.889us 00:11:19.303 25.00000% : 6301.538us 00:11:19.303 50.00000% : 6604.012us 00:11:19.303 75.00000% : 6956.898us 00:11:19.303 90.00000% : 7461.022us 00:11:19.303 95.00000% : 8418.855us 00:11:19.303 98.00000% : 9880.812us 00:11:19.303 99.00000% : 11746.068us 00:11:19.303 99.50000% : 27424.295us 00:11:19.303 99.90000% : 32465.526us 00:11:19.303 99.99000% : 32868.825us 00:11:19.303 99.99900% : 32868.825us 00:11:19.303 99.99990% : 32868.825us 00:11:19.303 99.99999% : 32868.825us 00:11:19.303 00:11:19.303 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:19.303 ================================================================================= 00:11:19.303 1.00000% : 5873.034us 00:11:19.303 10.00000% : 6150.302us 00:11:19.303 25.00000% : 6351.951us 00:11:19.303 50.00000% : 6604.012us 00:11:19.303 75.00000% : 6906.486us 00:11:19.303 90.00000% : 7511.434us 00:11:19.303 95.00000% : 8368.443us 00:11:19.303 98.00000% : 9830.400us 00:11:19.303 99.00000% : 11544.418us 00:11:19.303 99.50000% : 25710.277us 00:11:19.303 99.90000% : 30650.683us 00:11:19.303 99.99000% : 31053.982us 00:11:19.303 99.99900% : 31053.982us 00:11:19.303 99.99990% : 31053.982us 00:11:19.303 99.99999% : 31053.982us 00:11:19.303 00:11:19.303 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:19.303 ================================================================================= 00:11:19.303 1.00000% : 5847.828us 00:11:19.303 10.00000% : 6150.302us 00:11:19.303 25.00000% : 6326.745us 00:11:19.303 50.00000% : 6604.012us 00:11:19.303 75.00000% : 6856.074us 00:11:19.303 90.00000% : 7511.434us 00:11:19.303 95.00000% : 8469.268us 00:11:19.303 98.00000% : 10183.286us 00:11:19.303 99.00000% : 11393.182us 00:11:19.303 99.50000% : 24500.382us 00:11:19.303 99.90000% : 29440.788us 00:11:19.303 99.99000% : 29844.086us 00:11:19.303 99.99900% : 29844.086us 00:11:19.303 99.99990% : 29844.086us 00:11:19.303 99.99999% : 29844.086us 00:11:19.303 00:11:19.303 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:19.303 ================================================================================= 00:11:19.303 1.00000% : 5847.828us 00:11:19.303 10.00000% : 6150.302us 00:11:19.303 25.00000% : 6326.745us 00:11:19.303 50.00000% : 6604.012us 00:11:19.303 75.00000% : 6906.486us 00:11:19.303 90.00000% : 7461.022us 00:11:19.303 95.00000% : 8519.680us 00:11:19.303 98.00000% : 10183.286us 00:11:19.303 99.00000% : 
11393.182us 00:11:19.303 99.50000% : 22786.363us 00:11:19.303 99.90000% : 27625.945us 00:11:19.303 99.99000% : 28029.243us 00:11:19.303 99.99900% : 28029.243us 00:11:19.303 99.99990% : 28029.243us 00:11:19.303 99.99999% : 28029.243us 00:11:19.303 00:11:19.303 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:19.303 ================================================================================= 00:11:19.303 1.00000% : 5873.034us 00:11:19.303 10.00000% : 6150.302us 00:11:19.303 25.00000% : 6326.745us 00:11:19.303 50.00000% : 6604.012us 00:11:19.303 75.00000% : 6856.074us 00:11:19.303 90.00000% : 7461.022us 00:11:19.303 95.00000% : 8570.092us 00:11:19.303 98.00000% : 9931.225us 00:11:19.303 99.00000% : 11897.305us 00:11:19.303 99.50000% : 20971.520us 00:11:19.303 99.90000% : 25811.102us 00:11:19.303 99.99000% : 26214.400us 00:11:19.303 99.99900% : 26416.049us 00:11:19.303 99.99990% : 26416.049us 00:11:19.303 99.99999% : 26416.049us 00:11:19.303 00:11:19.303 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:19.303 ================================================================================= 00:11:19.303 1.00000% : 5898.240us 00:11:19.303 10.00000% : 6150.302us 00:11:19.303 25.00000% : 6326.745us 00:11:19.303 50.00000% : 6604.012us 00:11:19.303 75.00000% : 6906.486us 00:11:19.303 90.00000% : 7461.022us 00:11:19.303 95.00000% : 8519.680us 00:11:19.303 98.00000% : 9880.812us 00:11:19.303 99.00000% : 11897.305us 00:11:19.303 99.50000% : 15627.815us 00:11:19.303 99.90000% : 20568.222us 00:11:19.303 99.99000% : 20971.520us 00:11:19.303 99.99900% : 20971.520us 00:11:19.303 99.99990% : 20971.520us 00:11:19.303 99.99999% : 20971.520us 00:11:19.303 00:11:19.303 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:11:19.303 ============================================================================== 00:11:19.303 Range in us Cumulative IO count 00:11:19.303 5469.735 - 5494.942: 0.0107% ( 2) 00:11:19.303 5494.942 - 5520.148: 0.0376% ( 5) 00:11:19.303 5520.148 - 5545.354: 0.0591% ( 4) 00:11:19.303 5545.354 - 5570.560: 0.1074% ( 9) 00:11:19.303 5570.560 - 5595.766: 0.2040% ( 18) 00:11:19.303 5595.766 - 5620.972: 0.3061% ( 19) 00:11:19.303 5620.972 - 5646.178: 0.4027% ( 18) 00:11:19.303 5646.178 - 5671.385: 0.4725% ( 13) 00:11:19.303 5671.385 - 5696.591: 0.5423% ( 13) 00:11:19.303 5696.591 - 5721.797: 0.6282% ( 16) 00:11:19.303 5721.797 - 5747.003: 0.7463% ( 22) 00:11:19.303 5747.003 - 5772.209: 0.8323% ( 16) 00:11:19.303 5772.209 - 5797.415: 0.9611% ( 24) 00:11:19.303 5797.415 - 5822.622: 1.1491% ( 35) 00:11:19.303 5822.622 - 5847.828: 1.4068% ( 48) 00:11:19.303 5847.828 - 5873.034: 1.6484% ( 45) 00:11:19.303 5873.034 - 5898.240: 2.0082% ( 67) 00:11:19.303 5898.240 - 5923.446: 2.5290% ( 97) 00:11:19.303 5923.446 - 5948.652: 3.2378% ( 132) 00:11:19.303 5948.652 - 5973.858: 4.1559% ( 171) 00:11:19.303 5973.858 - 5999.065: 5.3426% ( 221) 00:11:19.303 5999.065 - 6024.271: 6.5614% ( 227) 00:11:19.303 6024.271 - 6049.477: 7.8071% ( 232) 00:11:19.303 6049.477 - 6074.683: 9.2408% ( 267) 00:11:19.303 6074.683 - 6099.889: 10.8462% ( 299) 00:11:19.303 6099.889 - 6125.095: 12.4570% ( 300) 00:11:19.303 6125.095 - 6150.302: 14.1538% ( 316) 00:11:19.303 6150.302 - 6175.508: 15.8022% ( 307) 00:11:19.303 6175.508 - 6200.714: 17.7889% ( 370) 00:11:19.303 6200.714 - 6225.920: 19.8024% ( 375) 00:11:19.303 6225.920 - 6251.126: 21.7032% ( 354) 00:11:19.303 6251.126 - 6276.332: 23.6952% ( 371) 00:11:19.303 6276.332 - 6301.538: 25.5101% ( 338) 00:11:19.303 6301.538 - 
6326.745: 27.5397% ( 378) 00:11:19.303 6326.745 - 6351.951: 29.5318% ( 371) 00:11:19.303 6351.951 - 6377.157: 31.6044% ( 386) 00:11:19.303 6377.157 - 6402.363: 33.6018% ( 372) 00:11:19.303 6402.363 - 6427.569: 35.6583% ( 383) 00:11:19.303 6427.569 - 6452.775: 37.7846% ( 396) 00:11:19.303 6452.775 - 6503.188: 41.7902% ( 746) 00:11:19.303 6503.188 - 6553.600: 46.0535% ( 794) 00:11:19.303 6553.600 - 6604.012: 50.1772% ( 768) 00:11:19.303 6604.012 - 6654.425: 54.2633% ( 761) 00:11:19.303 6654.425 - 6704.837: 58.4461% ( 779) 00:11:19.303 6704.837 - 6755.249: 62.6289% ( 779) 00:11:19.303 6755.249 - 6805.662: 66.9190% ( 799) 00:11:19.303 6805.662 - 6856.074: 70.8333% ( 729) 00:11:19.303 6856.074 - 6906.486: 74.6832% ( 717) 00:11:19.303 6906.486 - 6956.898: 78.0606% ( 629) 00:11:19.303 6956.898 - 7007.311: 80.7668% ( 504) 00:11:19.303 7007.311 - 7057.723: 83.0219% ( 420) 00:11:19.303 7057.723 - 7108.135: 84.8046% ( 332) 00:11:19.303 7108.135 - 7158.548: 86.1684% ( 254) 00:11:19.303 7158.548 - 7208.960: 87.2369% ( 199) 00:11:19.303 7208.960 - 7259.372: 87.9510% ( 133) 00:11:19.303 7259.372 - 7309.785: 88.5685% ( 115) 00:11:19.303 7309.785 - 7360.197: 89.1645% ( 111) 00:11:19.303 7360.197 - 7410.609: 89.7552% ( 110) 00:11:19.303 7410.609 - 7461.022: 90.2599% ( 94) 00:11:19.303 7461.022 - 7511.434: 90.6787% ( 78) 00:11:19.303 7511.434 - 7561.846: 91.0546% ( 70) 00:11:19.303 7561.846 - 7612.258: 91.3767% ( 60) 00:11:19.303 7612.258 - 7662.671: 91.6559% ( 52) 00:11:19.303 7662.671 - 7713.083: 91.9298% ( 51) 00:11:19.303 7713.083 - 7763.495: 92.1392% ( 39) 00:11:19.303 7763.495 - 7813.908: 92.4023% ( 49) 00:11:19.303 7813.908 - 7864.320: 92.6332% ( 43) 00:11:19.303 7864.320 - 7914.732: 92.8909% ( 48) 00:11:19.303 7914.732 - 7965.145: 93.1218% ( 43) 00:11:19.303 7965.145 - 8015.557: 93.3741% ( 47) 00:11:19.303 8015.557 - 8065.969: 93.5997% ( 42) 00:11:19.303 8065.969 - 8116.382: 93.8359% ( 44) 00:11:19.303 8116.382 - 8166.794: 94.0722% ( 44) 00:11:19.303 8166.794 - 8217.206: 94.3030% ( 43) 00:11:19.303 8217.206 - 8267.618: 94.5178% ( 40) 00:11:19.303 8267.618 - 8318.031: 94.7595% ( 45) 00:11:19.303 8318.031 - 8368.443: 94.9259% ( 31) 00:11:19.303 8368.443 - 8418.855: 95.0924% ( 31) 00:11:19.303 8418.855 - 8469.268: 95.2159% ( 23) 00:11:19.303 8469.268 - 8519.680: 95.3555% ( 26) 00:11:19.303 8519.680 - 8570.092: 95.4897% ( 25) 00:11:19.304 8570.092 - 8620.505: 95.5917% ( 19) 00:11:19.304 8620.505 - 8670.917: 95.7367% ( 27) 00:11:19.304 8670.917 - 8721.329: 95.8548% ( 22) 00:11:19.304 8721.329 - 8771.742: 95.9837% ( 24) 00:11:19.304 8771.742 - 8822.154: 96.0964% ( 21) 00:11:19.304 8822.154 - 8872.566: 96.2146% ( 22) 00:11:19.304 8872.566 - 8922.978: 96.3220% ( 20) 00:11:19.304 8922.978 - 8973.391: 96.4508% ( 24) 00:11:19.304 8973.391 - 9023.803: 96.5475% ( 18) 00:11:19.304 9023.803 - 9074.215: 96.6710% ( 23) 00:11:19.304 9074.215 - 9124.628: 96.7676% ( 18) 00:11:19.304 9124.628 - 9175.040: 96.8750% ( 20) 00:11:19.304 9175.040 - 9225.452: 96.9931% ( 22) 00:11:19.304 9225.452 - 9275.865: 97.0844% ( 17) 00:11:19.304 9275.865 - 9326.277: 97.1918% ( 20) 00:11:19.304 9326.277 - 9376.689: 97.2777% ( 16) 00:11:19.304 9376.689 - 9427.102: 97.3475% ( 13) 00:11:19.304 9427.102 - 9477.514: 97.4280% ( 15) 00:11:19.304 9477.514 - 9527.926: 97.5032% ( 14) 00:11:19.304 9527.926 - 9578.338: 97.5891% ( 16) 00:11:19.304 9578.338 - 9628.751: 97.6589% ( 13) 00:11:19.304 9628.751 - 9679.163: 97.7287% ( 13) 00:11:19.304 9679.163 - 9729.575: 97.8093% ( 15) 00:11:19.304 9729.575 - 9779.988: 97.9167% ( 20) 00:11:19.304 9779.988 - 
9830.400: 97.9865% ( 13) 00:11:19.304 9830.400 - 9880.812: 98.0455% ( 11) 00:11:19.304 9880.812 - 9931.225: 98.1261% ( 15) 00:11:19.304 9931.225 - 9981.637: 98.1798% ( 10) 00:11:19.304 9981.637 - 10032.049: 98.2388% ( 11) 00:11:19.304 10032.049 - 10082.462: 98.2764% ( 7) 00:11:19.304 10082.462 - 10132.874: 98.3086% ( 6) 00:11:19.304 10132.874 - 10183.286: 98.3409% ( 6) 00:11:19.304 10183.286 - 10233.698: 98.3623% ( 4) 00:11:19.304 10233.698 - 10284.111: 98.3945% ( 6) 00:11:19.304 10284.111 - 10334.523: 98.4214% ( 5) 00:11:19.304 10334.523 - 10384.935: 98.4321% ( 2) 00:11:19.304 10384.935 - 10435.348: 98.4429% ( 2) 00:11:19.304 10435.348 - 10485.760: 98.4536% ( 2) 00:11:19.304 10485.760 - 10536.172: 98.4643% ( 2) 00:11:19.304 10536.172 - 10586.585: 98.4751% ( 2) 00:11:19.304 10586.585 - 10636.997: 98.4858% ( 2) 00:11:19.304 10636.997 - 10687.409: 98.5019% ( 3) 00:11:19.304 10687.409 - 10737.822: 98.5073% ( 1) 00:11:19.304 10737.822 - 10788.234: 98.5234% ( 3) 00:11:19.304 10788.234 - 10838.646: 98.5288% ( 1) 00:11:19.304 10838.646 - 10889.058: 98.5395% ( 2) 00:11:19.304 10889.058 - 10939.471: 98.5503% ( 2) 00:11:19.304 10939.471 - 10989.883: 98.5610% ( 2) 00:11:19.304 10989.883 - 11040.295: 98.5878% ( 5) 00:11:19.304 11040.295 - 11090.708: 98.6093% ( 4) 00:11:19.304 11090.708 - 11141.120: 98.6523% ( 8) 00:11:19.304 11141.120 - 11191.532: 98.6952% ( 8) 00:11:19.304 11191.532 - 11241.945: 98.7328% ( 7) 00:11:19.304 11241.945 - 11292.357: 98.7811% ( 9) 00:11:19.304 11292.357 - 11342.769: 98.8026% ( 4) 00:11:19.304 11342.769 - 11393.182: 98.8241% ( 4) 00:11:19.304 11393.182 - 11443.594: 98.8563% ( 6) 00:11:19.304 11443.594 - 11494.006: 98.8724% ( 3) 00:11:19.304 11494.006 - 11544.418: 98.9100% ( 7) 00:11:19.304 11544.418 - 11594.831: 98.9369% ( 5) 00:11:19.304 11594.831 - 11645.243: 98.9637% ( 5) 00:11:19.304 11645.243 - 11695.655: 98.9852% ( 4) 00:11:19.304 11695.655 - 11746.068: 99.0174% ( 6) 00:11:19.304 11746.068 - 11796.480: 99.0335% ( 3) 00:11:19.304 11796.480 - 11846.892: 99.0711% ( 7) 00:11:19.304 11846.892 - 11897.305: 99.0926% ( 4) 00:11:19.304 11897.305 - 11947.717: 99.1194% ( 5) 00:11:19.304 11947.717 - 11998.129: 99.1570% ( 7) 00:11:19.304 11998.129 - 12048.542: 99.1731% ( 3) 00:11:19.304 12048.542 - 12098.954: 99.1785% ( 1) 00:11:19.304 12098.954 - 12149.366: 99.1946% ( 3) 00:11:19.304 12149.366 - 12199.778: 99.2053% ( 2) 00:11:19.304 12199.778 - 12250.191: 99.2161% ( 2) 00:11:19.304 12250.191 - 12300.603: 99.2268% ( 2) 00:11:19.304 12300.603 - 12351.015: 99.2375% ( 2) 00:11:19.304 12351.015 - 12401.428: 99.2429% ( 1) 00:11:19.304 12401.428 - 12451.840: 99.2537% ( 2) 00:11:19.304 12451.840 - 12502.252: 99.2644% ( 2) 00:11:19.304 12502.252 - 12552.665: 99.2751% ( 2) 00:11:19.304 12552.665 - 12603.077: 99.2859% ( 2) 00:11:19.304 12603.077 - 12653.489: 99.3020% ( 3) 00:11:19.304 12653.489 - 12703.902: 99.3127% ( 2) 00:11:19.304 26416.049 - 26617.698: 99.3503% ( 7) 00:11:19.304 26617.698 - 26819.348: 99.3933% ( 8) 00:11:19.304 26819.348 - 27020.997: 99.4362% ( 8) 00:11:19.304 27020.997 - 27222.646: 99.4792% ( 8) 00:11:19.304 27222.646 - 27424.295: 99.5168% ( 7) 00:11:19.304 27424.295 - 27625.945: 99.5597% ( 8) 00:11:19.304 27625.945 - 27827.594: 99.5973% ( 7) 00:11:19.304 27827.594 - 28029.243: 99.6456% ( 9) 00:11:19.304 28029.243 - 28230.892: 99.6564% ( 2) 00:11:19.304 31053.982 - 31255.631: 99.6778% ( 4) 00:11:19.304 31255.631 - 31457.280: 99.7208% ( 8) 00:11:19.304 31457.280 - 31658.929: 99.7584% ( 7) 00:11:19.304 31658.929 - 31860.578: 99.8013% ( 8) 00:11:19.304 31860.578 - 
32062.228: 99.8497% ( 9) 00:11:19.304 32062.228 - 32263.877: 99.8926% ( 8) 00:11:19.304 32263.877 - 32465.526: 99.9356% ( 8) 00:11:19.304 32465.526 - 32667.175: 99.9785% ( 8) 00:11:19.304 32667.175 - 32868.825: 100.0000% ( 4) 00:11:19.304 00:11:19.304 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:11:19.304 ============================================================================== 00:11:19.304 Range in us Cumulative IO count 00:11:19.304 5545.354 - 5570.560: 0.0107% ( 2) 00:11:19.304 5570.560 - 5595.766: 0.0483% ( 7) 00:11:19.304 5595.766 - 5620.972: 0.0913% ( 8) 00:11:19.304 5620.972 - 5646.178: 0.1235% ( 6) 00:11:19.304 5646.178 - 5671.385: 0.2040% ( 15) 00:11:19.304 5671.385 - 5696.591: 0.2846% ( 15) 00:11:19.304 5696.591 - 5721.797: 0.4081% ( 23) 00:11:19.304 5721.797 - 5747.003: 0.5262% ( 22) 00:11:19.304 5747.003 - 5772.209: 0.6282% ( 19) 00:11:19.304 5772.209 - 5797.415: 0.7088% ( 15) 00:11:19.304 5797.415 - 5822.622: 0.8054% ( 18) 00:11:19.304 5822.622 - 5847.828: 0.9504% ( 27) 00:11:19.304 5847.828 - 5873.034: 1.1276% ( 33) 00:11:19.304 5873.034 - 5898.240: 1.3048% ( 33) 00:11:19.304 5898.240 - 5923.446: 1.5088% ( 38) 00:11:19.304 5923.446 - 5948.652: 1.8149% ( 57) 00:11:19.304 5948.652 - 5973.858: 2.1800% ( 68) 00:11:19.304 5973.858 - 5999.065: 2.7008% ( 97) 00:11:19.304 5999.065 - 6024.271: 3.4418% ( 138) 00:11:19.304 6024.271 - 6049.477: 4.3814% ( 175) 00:11:19.304 6049.477 - 6074.683: 5.5198% ( 212) 00:11:19.304 6074.683 - 6099.889: 6.9265% ( 262) 00:11:19.304 6099.889 - 6125.095: 8.5642% ( 305) 00:11:19.304 6125.095 - 6150.302: 10.3308% ( 329) 00:11:19.304 6150.302 - 6175.508: 12.4839% ( 401) 00:11:19.304 6175.508 - 6200.714: 14.2343% ( 326) 00:11:19.304 6200.714 - 6225.920: 16.0760% ( 343) 00:11:19.304 6225.920 - 6251.126: 18.1057% ( 378) 00:11:19.304 6251.126 - 6276.332: 20.1622% ( 383) 00:11:19.304 6276.332 - 6301.538: 22.3744% ( 412) 00:11:19.304 6301.538 - 6326.745: 24.8711% ( 465) 00:11:19.304 6326.745 - 6351.951: 27.3518% ( 462) 00:11:19.304 6351.951 - 6377.157: 29.7197% ( 441) 00:11:19.304 6377.157 - 6402.363: 32.0447% ( 433) 00:11:19.304 6402.363 - 6427.569: 34.2730% ( 415) 00:11:19.304 6427.569 - 6452.775: 36.6248% ( 438) 00:11:19.304 6452.775 - 6503.188: 41.3767% ( 885) 00:11:19.304 6503.188 - 6553.600: 46.2146% ( 901) 00:11:19.304 6553.600 - 6604.012: 50.9343% ( 879) 00:11:19.304 6604.012 - 6654.425: 55.7829% ( 903) 00:11:19.304 6654.425 - 6704.837: 60.5724% ( 892) 00:11:19.304 6704.837 - 6755.249: 65.3404% ( 888) 00:11:19.304 6755.249 - 6805.662: 69.9635% ( 861) 00:11:19.304 6805.662 - 6856.074: 74.0926% ( 769) 00:11:19.304 6856.074 - 6906.486: 77.5666% ( 647) 00:11:19.304 6906.486 - 6956.898: 80.3265% ( 514) 00:11:19.304 6956.898 - 7007.311: 82.3829% ( 383) 00:11:19.304 7007.311 - 7057.723: 84.1119% ( 322) 00:11:19.304 7057.723 - 7108.135: 85.6261% ( 282) 00:11:19.304 7108.135 - 7158.548: 86.5496% ( 172) 00:11:19.304 7158.548 - 7208.960: 87.2852% ( 137) 00:11:19.304 7208.960 - 7259.372: 87.9296% ( 120) 00:11:19.304 7259.372 - 7309.785: 88.4772% ( 102) 00:11:19.304 7309.785 - 7360.197: 88.9820% ( 94) 00:11:19.304 7360.197 - 7410.609: 89.4598% ( 89) 00:11:19.304 7410.609 - 7461.022: 89.8411% ( 71) 00:11:19.304 7461.022 - 7511.434: 90.2491% ( 76) 00:11:19.304 7511.434 - 7561.846: 90.6304% ( 71) 00:11:19.304 7561.846 - 7612.258: 90.9257% ( 55) 00:11:19.304 7612.258 - 7662.671: 91.2103% ( 53) 00:11:19.304 7662.671 - 7713.083: 91.5324% ( 60) 00:11:19.304 7713.083 - 7763.495: 91.8439% ( 58) 00:11:19.304 7763.495 - 7813.908: 92.1553% ( 58) 
00:11:19.304 7813.908 - 7864.320: 92.5097% ( 66) 00:11:19.304 7864.320 - 7914.732: 92.8426% ( 62) 00:11:19.304 7914.732 - 7965.145: 93.1647% ( 60) 00:11:19.304 7965.145 - 8015.557: 93.4332% ( 50) 00:11:19.304 8015.557 - 8065.969: 93.6856% ( 47) 00:11:19.304 8065.969 - 8116.382: 93.9487% ( 49) 00:11:19.304 8116.382 - 8166.794: 94.1957% ( 46) 00:11:19.304 8166.794 - 8217.206: 94.4641% ( 50) 00:11:19.304 8217.206 - 8267.618: 94.6843% ( 41) 00:11:19.304 8267.618 - 8318.031: 94.8883% ( 38) 00:11:19.304 8318.031 - 8368.443: 95.1031% ( 40) 00:11:19.304 8368.443 - 8418.855: 95.2964% ( 36) 00:11:19.304 8418.855 - 8469.268: 95.4951% ( 37) 00:11:19.304 8469.268 - 8519.680: 95.6561% ( 30) 00:11:19.304 8519.680 - 8570.092: 95.8065% ( 28) 00:11:19.304 8570.092 - 8620.505: 95.9729% ( 31) 00:11:19.304 8620.505 - 8670.917: 96.1233% ( 28) 00:11:19.304 8670.917 - 8721.329: 96.2414% ( 22) 00:11:19.304 8721.329 - 8771.742: 96.3434% ( 19) 00:11:19.304 8771.742 - 8822.154: 96.4616% ( 22) 00:11:19.304 8822.154 - 8872.566: 96.5689% ( 20) 00:11:19.305 8872.566 - 8922.978: 96.6763% ( 20) 00:11:19.305 8922.978 - 8973.391: 96.7945% ( 22) 00:11:19.305 8973.391 - 9023.803: 96.9018% ( 20) 00:11:19.305 9023.803 - 9074.215: 97.0039% ( 19) 00:11:19.305 9074.215 - 9124.628: 97.0951% ( 17) 00:11:19.305 9124.628 - 9175.040: 97.1918% ( 18) 00:11:19.305 9175.040 - 9225.452: 97.2992% ( 20) 00:11:19.305 9225.452 - 9275.865: 97.4066% ( 20) 00:11:19.305 9275.865 - 9326.277: 97.4871% ( 15) 00:11:19.305 9326.277 - 9376.689: 97.5408% ( 10) 00:11:19.305 9376.689 - 9427.102: 97.6106% ( 13) 00:11:19.305 9427.102 - 9477.514: 97.6912% ( 15) 00:11:19.305 9477.514 - 9527.926: 97.7448% ( 10) 00:11:19.305 9527.926 - 9578.338: 97.7985% ( 10) 00:11:19.305 9578.338 - 9628.751: 97.8576% ( 11) 00:11:19.305 9628.751 - 9679.163: 97.9113% ( 10) 00:11:19.305 9679.163 - 9729.575: 97.9543% ( 8) 00:11:19.305 9729.575 - 9779.988: 97.9865% ( 6) 00:11:19.305 9779.988 - 9830.400: 98.0294% ( 8) 00:11:19.305 9830.400 - 9880.812: 98.0670% ( 7) 00:11:19.305 9880.812 - 9931.225: 98.1046% ( 7) 00:11:19.305 9931.225 - 9981.637: 98.1476% ( 8) 00:11:19.305 9981.637 - 10032.049: 98.1851% ( 7) 00:11:19.305 10032.049 - 10082.462: 98.2174% ( 6) 00:11:19.305 10082.462 - 10132.874: 98.2388% ( 4) 00:11:19.305 10132.874 - 10183.286: 98.2549% ( 3) 00:11:19.305 10183.286 - 10233.698: 98.2764% ( 4) 00:11:19.305 10233.698 - 10284.111: 98.2818% ( 1) 00:11:19.305 10435.348 - 10485.760: 98.2872% ( 1) 00:11:19.305 10485.760 - 10536.172: 98.3086% ( 4) 00:11:19.305 10536.172 - 10586.585: 98.3247% ( 3) 00:11:19.305 10586.585 - 10636.997: 98.3623% ( 7) 00:11:19.305 10636.997 - 10687.409: 98.3731% ( 2) 00:11:19.305 10687.409 - 10737.822: 98.3892% ( 3) 00:11:19.305 10737.822 - 10788.234: 98.4160% ( 5) 00:11:19.305 10788.234 - 10838.646: 98.4429% ( 5) 00:11:19.305 10838.646 - 10889.058: 98.4697% ( 5) 00:11:19.305 10889.058 - 10939.471: 98.4912% ( 4) 00:11:19.305 10939.471 - 10989.883: 98.5180% ( 5) 00:11:19.305 10989.883 - 11040.295: 98.5717% ( 10) 00:11:19.305 11040.295 - 11090.708: 98.6147% ( 8) 00:11:19.305 11090.708 - 11141.120: 98.6576% ( 8) 00:11:19.305 11141.120 - 11191.532: 98.7006% ( 8) 00:11:19.305 11191.532 - 11241.945: 98.7489% ( 9) 00:11:19.305 11241.945 - 11292.357: 98.7973% ( 9) 00:11:19.305 11292.357 - 11342.769: 98.8509% ( 10) 00:11:19.305 11342.769 - 11393.182: 98.8939% ( 8) 00:11:19.305 11393.182 - 11443.594: 98.9369% ( 8) 00:11:19.305 11443.594 - 11494.006: 98.9852% ( 9) 00:11:19.305 11494.006 - 11544.418: 99.0335% ( 9) 00:11:19.305 11544.418 - 11594.831: 99.0657% ( 6) 
00:11:19.305 11594.831 - 11645.243: 99.1033% ( 7) 00:11:19.305 11645.243 - 11695.655: 99.1516% ( 9) 00:11:19.305 11695.655 - 11746.068: 99.2000% ( 9) 00:11:19.305 11746.068 - 11796.480: 99.2429% ( 8) 00:11:19.305 11796.480 - 11846.892: 99.2805% ( 7) 00:11:19.305 11846.892 - 11897.305: 99.2912% ( 2) 00:11:19.305 11897.305 - 11947.717: 99.3020% ( 2) 00:11:19.305 11947.717 - 11998.129: 99.3127% ( 2) 00:11:19.305 24802.855 - 24903.680: 99.3288% ( 3) 00:11:19.305 24903.680 - 25004.505: 99.3503% ( 4) 00:11:19.305 25004.505 - 25105.329: 99.3718% ( 4) 00:11:19.305 25105.329 - 25206.154: 99.3933% ( 4) 00:11:19.305 25206.154 - 25306.978: 99.4147% ( 4) 00:11:19.305 25306.978 - 25407.803: 99.4416% ( 5) 00:11:19.305 25407.803 - 25508.628: 99.4631% ( 4) 00:11:19.305 25508.628 - 25609.452: 99.4845% ( 4) 00:11:19.305 25609.452 - 25710.277: 99.5114% ( 5) 00:11:19.305 25710.277 - 25811.102: 99.5329% ( 4) 00:11:19.305 25811.102 - 26012.751: 99.5812% ( 9) 00:11:19.305 26012.751 - 26214.400: 99.6241% ( 8) 00:11:19.305 26214.400 - 26416.049: 99.6564% ( 6) 00:11:19.305 29440.788 - 29642.437: 99.6886% ( 6) 00:11:19.305 29642.437 - 29844.086: 99.7369% ( 9) 00:11:19.305 29844.086 - 30045.735: 99.7745% ( 7) 00:11:19.305 30045.735 - 30247.385: 99.8228% ( 9) 00:11:19.305 30247.385 - 30449.034: 99.8711% ( 9) 00:11:19.305 30449.034 - 30650.683: 99.9195% ( 9) 00:11:19.305 30650.683 - 30852.332: 99.9624% ( 8) 00:11:19.305 30852.332 - 31053.982: 100.0000% ( 7) 00:11:19.305 00:11:19.305 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:11:19.305 ============================================================================== 00:11:19.305 Range in us Cumulative IO count 00:11:19.305 5545.354 - 5570.560: 0.0107% ( 2) 00:11:19.305 5570.560 - 5595.766: 0.0215% ( 2) 00:11:19.305 5595.766 - 5620.972: 0.0322% ( 2) 00:11:19.305 5620.972 - 5646.178: 0.0537% ( 4) 00:11:19.305 5646.178 - 5671.385: 0.1503% ( 18) 00:11:19.305 5671.385 - 5696.591: 0.2577% ( 20) 00:11:19.305 5696.591 - 5721.797: 0.3759% ( 22) 00:11:19.305 5721.797 - 5747.003: 0.5155% ( 26) 00:11:19.305 5747.003 - 5772.209: 0.6497% ( 25) 00:11:19.305 5772.209 - 5797.415: 0.7893% ( 26) 00:11:19.305 5797.415 - 5822.622: 0.9021% ( 21) 00:11:19.305 5822.622 - 5847.828: 1.0095% ( 20) 00:11:19.305 5847.828 - 5873.034: 1.1437% ( 25) 00:11:19.305 5873.034 - 5898.240: 1.3585% ( 40) 00:11:19.305 5898.240 - 5923.446: 1.5357% ( 33) 00:11:19.305 5923.446 - 5948.652: 1.7826% ( 46) 00:11:19.305 5948.652 - 5973.858: 2.1209% ( 63) 00:11:19.305 5973.858 - 5999.065: 2.6256% ( 94) 00:11:19.305 5999.065 - 6024.271: 3.2807% ( 122) 00:11:19.305 6024.271 - 6049.477: 4.3707% ( 203) 00:11:19.305 6049.477 - 6074.683: 5.6970% ( 247) 00:11:19.305 6074.683 - 6099.889: 7.0178% ( 246) 00:11:19.305 6099.889 - 6125.095: 8.5481% ( 285) 00:11:19.305 6125.095 - 6150.302: 10.2556% ( 318) 00:11:19.305 6150.302 - 6175.508: 12.1027% ( 344) 00:11:19.305 6175.508 - 6200.714: 14.1484% ( 381) 00:11:19.305 6200.714 - 6225.920: 16.2264% ( 387) 00:11:19.305 6225.920 - 6251.126: 18.3634% ( 398) 00:11:19.305 6251.126 - 6276.332: 20.6669% ( 429) 00:11:19.305 6276.332 - 6301.538: 22.8361% ( 404) 00:11:19.305 6301.538 - 6326.745: 25.1611% ( 433) 00:11:19.305 6326.745 - 6351.951: 27.6418% ( 462) 00:11:19.305 6351.951 - 6377.157: 30.1063% ( 459) 00:11:19.305 6377.157 - 6402.363: 32.4742% ( 441) 00:11:19.305 6402.363 - 6427.569: 34.9066% ( 453) 00:11:19.305 6427.569 - 6452.775: 37.4302% ( 470) 00:11:19.305 6452.775 - 6503.188: 42.2680% ( 901) 00:11:19.305 6503.188 - 6553.600: 46.9824% ( 878) 00:11:19.305 6553.600 - 
6604.012: 51.7826% ( 894) 00:11:19.305 6604.012 - 6654.425: 56.5722% ( 892) 00:11:19.305 6654.425 - 6704.837: 61.4046% ( 900) 00:11:19.305 6704.837 - 6755.249: 66.1727% ( 888) 00:11:19.305 6755.249 - 6805.662: 70.7957% ( 861) 00:11:19.305 6805.662 - 6856.074: 75.1235% ( 806) 00:11:19.305 6856.074 - 6906.486: 78.5814% ( 644) 00:11:19.305 6906.486 - 6956.898: 81.2768% ( 502) 00:11:19.305 6956.898 - 7007.311: 83.3065% ( 378) 00:11:19.305 7007.311 - 7057.723: 84.9334% ( 303) 00:11:19.305 7057.723 - 7108.135: 86.1469% ( 226) 00:11:19.305 7108.135 - 7158.548: 87.0060% ( 160) 00:11:19.305 7158.548 - 7208.960: 87.6181% ( 114) 00:11:19.305 7208.960 - 7259.372: 88.2249% ( 113) 00:11:19.305 7259.372 - 7309.785: 88.7081% ( 90) 00:11:19.305 7309.785 - 7360.197: 89.1484% ( 82) 00:11:19.305 7360.197 - 7410.609: 89.5672% ( 78) 00:11:19.305 7410.609 - 7461.022: 89.9753% ( 76) 00:11:19.305 7461.022 - 7511.434: 90.2867% ( 58) 00:11:19.305 7511.434 - 7561.846: 90.6035% ( 59) 00:11:19.305 7561.846 - 7612.258: 90.8935% ( 54) 00:11:19.305 7612.258 - 7662.671: 91.1942% ( 56) 00:11:19.305 7662.671 - 7713.083: 91.4841% ( 54) 00:11:19.305 7713.083 - 7763.495: 91.7365% ( 47) 00:11:19.305 7763.495 - 7813.908: 92.0049% ( 50) 00:11:19.305 7813.908 - 7864.320: 92.2519% ( 46) 00:11:19.305 7864.320 - 7914.732: 92.4774% ( 42) 00:11:19.305 7914.732 - 7965.145: 92.7030% ( 42) 00:11:19.305 7965.145 - 8015.557: 92.8640% ( 30) 00:11:19.305 8015.557 - 8065.969: 93.1003% ( 44) 00:11:19.305 8065.969 - 8116.382: 93.3741% ( 51) 00:11:19.305 8116.382 - 8166.794: 93.6211% ( 46) 00:11:19.305 8166.794 - 8217.206: 93.8359% ( 40) 00:11:19.305 8217.206 - 8267.618: 94.0883% ( 47) 00:11:19.305 8267.618 - 8318.031: 94.3782% ( 54) 00:11:19.305 8318.031 - 8368.443: 94.5984% ( 41) 00:11:19.305 8368.443 - 8418.855: 94.7809% ( 34) 00:11:19.305 8418.855 - 8469.268: 95.0011% ( 41) 00:11:19.305 8469.268 - 8519.680: 95.1890% ( 35) 00:11:19.305 8519.680 - 8570.092: 95.3877% ( 37) 00:11:19.305 8570.092 - 8620.505: 95.5971% ( 39) 00:11:19.305 8620.505 - 8670.917: 95.7635% ( 31) 00:11:19.305 8670.917 - 8721.329: 95.9944% ( 43) 00:11:19.305 8721.329 - 8771.742: 96.1662% ( 32) 00:11:19.305 8771.742 - 8822.154: 96.3327% ( 31) 00:11:19.305 8822.154 - 8872.566: 96.4830% ( 28) 00:11:19.305 8872.566 - 8922.978: 96.6549% ( 32) 00:11:19.305 8922.978 - 8973.391: 96.8052% ( 28) 00:11:19.305 8973.391 - 9023.803: 96.9233% ( 22) 00:11:19.305 9023.803 - 9074.215: 97.0361% ( 21) 00:11:19.305 9074.215 - 9124.628: 97.1327% ( 18) 00:11:19.305 9124.628 - 9175.040: 97.2025% ( 13) 00:11:19.305 9175.040 - 9225.452: 97.2777% ( 14) 00:11:19.305 9225.452 - 9275.865: 97.3475% ( 13) 00:11:19.305 9275.865 - 9326.277: 97.4227% ( 14) 00:11:19.305 9326.277 - 9376.689: 97.4979% ( 14) 00:11:19.305 9376.689 - 9427.102: 97.5784% ( 15) 00:11:19.305 9427.102 - 9477.514: 97.6375% ( 11) 00:11:19.305 9477.514 - 9527.926: 97.6750% ( 7) 00:11:19.305 9527.926 - 9578.338: 97.7073% ( 6) 00:11:19.305 9578.338 - 9628.751: 97.7502% ( 8) 00:11:19.305 9628.751 - 9679.163: 97.7878% ( 7) 00:11:19.305 9679.163 - 9729.575: 97.8308% ( 8) 00:11:19.306 9729.575 - 9779.988: 97.8683% ( 7) 00:11:19.306 9779.988 - 9830.400: 97.8952% ( 5) 00:11:19.306 9830.400 - 9880.812: 97.9113% ( 3) 00:11:19.306 9880.812 - 9931.225: 97.9328% ( 4) 00:11:19.306 9931.225 - 9981.637: 97.9381% ( 1) 00:11:19.306 9981.637 - 10032.049: 97.9543% ( 3) 00:11:19.306 10032.049 - 10082.462: 97.9704% ( 3) 00:11:19.306 10082.462 - 10132.874: 97.9918% ( 4) 00:11:19.306 10132.874 - 10183.286: 98.0133% ( 4) 00:11:19.306 10183.286 - 10233.698: 
98.0294% ( 3) 00:11:19.306 10233.698 - 10284.111: 98.0509% ( 4) 00:11:19.306 10284.111 - 10334.523: 98.0724% ( 4) 00:11:19.306 10334.523 - 10384.935: 98.1046% ( 6) 00:11:19.306 10384.935 - 10435.348: 98.1368% ( 6) 00:11:19.306 10435.348 - 10485.760: 98.1798% ( 8) 00:11:19.306 10485.760 - 10536.172: 98.2227% ( 8) 00:11:19.306 10536.172 - 10586.585: 98.2764% ( 10) 00:11:19.306 10586.585 - 10636.997: 98.3194% ( 8) 00:11:19.306 10636.997 - 10687.409: 98.3838% ( 12) 00:11:19.306 10687.409 - 10737.822: 98.4536% ( 13) 00:11:19.306 10737.822 - 10788.234: 98.5180% ( 12) 00:11:19.306 10788.234 - 10838.646: 98.5771% ( 11) 00:11:19.306 10838.646 - 10889.058: 98.6308% ( 10) 00:11:19.306 10889.058 - 10939.471: 98.6791% ( 9) 00:11:19.306 10939.471 - 10989.883: 98.7274% ( 9) 00:11:19.306 10989.883 - 11040.295: 98.7704% ( 8) 00:11:19.306 11040.295 - 11090.708: 98.8134% ( 8) 00:11:19.306 11090.708 - 11141.120: 98.8563% ( 8) 00:11:19.306 11141.120 - 11191.532: 98.8939% ( 7) 00:11:19.306 11191.532 - 11241.945: 98.9369% ( 8) 00:11:19.306 11241.945 - 11292.357: 98.9637% ( 5) 00:11:19.306 11292.357 - 11342.769: 98.9905% ( 5) 00:11:19.306 11342.769 - 11393.182: 99.0120% ( 4) 00:11:19.306 11393.182 - 11443.594: 99.0389% ( 5) 00:11:19.306 11443.594 - 11494.006: 99.0657% ( 5) 00:11:19.306 11494.006 - 11544.418: 99.0926% ( 5) 00:11:19.306 11544.418 - 11594.831: 99.1140% ( 4) 00:11:19.306 11594.831 - 11645.243: 99.1409% ( 5) 00:11:19.306 11645.243 - 11695.655: 99.1677% ( 5) 00:11:19.306 11695.655 - 11746.068: 99.1946% ( 5) 00:11:19.306 11746.068 - 11796.480: 99.2161% ( 4) 00:11:19.306 11796.480 - 11846.892: 99.2375% ( 4) 00:11:19.306 11846.892 - 11897.305: 99.2698% ( 6) 00:11:19.306 11897.305 - 11947.717: 99.2912% ( 4) 00:11:19.306 11947.717 - 11998.129: 99.3020% ( 2) 00:11:19.306 11998.129 - 12048.542: 99.3127% ( 2) 00:11:19.306 23592.960 - 23693.785: 99.3235% ( 2) 00:11:19.306 23693.785 - 23794.609: 99.3449% ( 4) 00:11:19.306 23794.609 - 23895.434: 99.3664% ( 4) 00:11:19.306 23895.434 - 23996.258: 99.3879% ( 4) 00:11:19.306 23996.258 - 24097.083: 99.4094% ( 4) 00:11:19.306 24097.083 - 24197.908: 99.4308% ( 4) 00:11:19.306 24197.908 - 24298.732: 99.4523% ( 4) 00:11:19.306 24298.732 - 24399.557: 99.4792% ( 5) 00:11:19.306 24399.557 - 24500.382: 99.5006% ( 4) 00:11:19.306 24500.382 - 24601.206: 99.5221% ( 4) 00:11:19.306 24601.206 - 24702.031: 99.5436% ( 4) 00:11:19.306 24702.031 - 24802.855: 99.5651% ( 4) 00:11:19.306 24802.855 - 24903.680: 99.5919% ( 5) 00:11:19.306 24903.680 - 25004.505: 99.6134% ( 4) 00:11:19.306 25004.505 - 25105.329: 99.6349% ( 4) 00:11:19.306 25105.329 - 25206.154: 99.6564% ( 4) 00:11:19.306 28029.243 - 28230.892: 99.6617% ( 1) 00:11:19.306 28230.892 - 28432.542: 99.7047% ( 8) 00:11:19.306 28432.542 - 28634.191: 99.7423% ( 7) 00:11:19.306 28634.191 - 28835.840: 99.7852% ( 8) 00:11:19.306 28835.840 - 29037.489: 99.8282% ( 8) 00:11:19.306 29037.489 - 29239.138: 99.8765% ( 9) 00:11:19.306 29239.138 - 29440.788: 99.9195% ( 8) 00:11:19.306 29440.788 - 29642.437: 99.9678% ( 9) 00:11:19.306 29642.437 - 29844.086: 100.0000% ( 6) 00:11:19.306 00:11:19.306 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:11:19.306 ============================================================================== 00:11:19.306 Range in us Cumulative IO count 00:11:19.306 5545.354 - 5570.560: 0.0054% ( 1) 00:11:19.306 5570.560 - 5595.766: 0.0483% ( 8) 00:11:19.306 5595.766 - 5620.972: 0.0966% ( 9) 00:11:19.306 5620.972 - 5646.178: 0.1342% ( 7) 00:11:19.306 5646.178 - 5671.385: 0.1987% ( 12) 00:11:19.306 5671.385 
[per-bucket rows of the preceding latency histogram omitted; cumulative IO count reaches 100.0000% at 28029.243us]
00:11:19.307 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:11:19.307 ==============================================================================
00:11:19.307        Range in us     Cumulative    IO count
[per-bucket rows omitted; buckets span 5595.766us to 26416.049us, cumulative IO count reaches 100.0000% at 26416.049us]
00:11:19.308 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:11:19.308 ==============================================================================
00:11:19.308        Range in us     Cumulative    IO count
[per-bucket rows omitted; buckets span 5570.560us to 20971.520us, cumulative IO count reaches 100.0000% at 20971.520us]
00:11:19.309 19:29:38 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
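For reference, the flags on that spdk_nvme_perf invocation break down roughly as follows, paraphrased from the tool's help output rather than from anything captured in this log:

# -q 128     queue depth: keep 128 I/Os outstanding per namespace
# -w write   sequential-write workload
# -o 12288   I/O size in bytes (12 KiB per I/O)
# -t 1       run time in seconds
# -LL        software latency tracking; giving -L twice requests the detailed
#            per-bucket histograms in addition to the summary percentiles
# -i 0       shared memory group ID, so perf can coexist with other SPDK processes
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0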
00:11:20.712 Initializing NVMe Controllers
00:11:20.712 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:11:20.712 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:11:20.712 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:11:20.712 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:11:20.712 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:11:20.712 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:11:20.712 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:11:20.712 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:11:20.712 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:11:20.712 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:11:20.712 Initialization complete. Launching workers.
00:11:20.712 ========================================================
00:11:20.712                                                              Latency(us)
00:11:20.712 Device Information                     :       IOPS      MiB/s    Average        min        max
00:11:20.712 PCIE (0000:00:10.0) NSID 1 from core 0:   17045.18     199.75    7519.49    5689.62   32287.06
00:11:20.712 PCIE (0000:00:11.0) NSID 1 from core 0:   17045.18     199.75    7507.97    5811.21   30503.01
00:11:20.712 PCIE (0000:00:13.0) NSID 1 from core 0:   17045.18     199.75    7496.20    5828.80   28780.24
00:11:20.712 PCIE (0000:00:12.0) NSID 1 from core 0:   17045.18     199.75    7484.51    5791.85   27100.54
00:11:20.712 PCIE (0000:00:12.0) NSID 2 from core 0:   17045.18     199.75    7472.73    5926.97   25323.32
00:11:20.712 PCIE (0000:00:12.0) NSID 3 from core 0:   17109.02     200.50    7433.27    5830.42   20144.13
00:11:20.712 ========================================================
00:11:20.712 Total                                  :  102334.93    1199.24    7485.66    5689.62   32287.06
00:11:20.712 Summary latency data from core 0, consolidated across devices (all values in us):
00:11:20.712 =================================================================================
00:11:20.712 Percentile :   10.0/ns1    11.0/ns1    13.0/ns1    12.0/ns1    12.0/ns2    12.0/ns3
00:11:20.712   1.00000% :   6074.683    6175.508    6175.508    6200.714    6200.714    6175.508
00:11:20.712  10.00000% :   6503.188    6553.600    6553.600    6553.600    6553.600    6553.600
00:11:20.712  25.00000% :   6755.249    6755.249    6805.662    6805.662    6805.662    6805.662
00:11:20.712  50.00000% :   7057.723    7007.311    7007.311    7007.311    7057.723    7057.723
00:11:20.712  75.00000% :   7561.846    7561.846    7511.434    7511.434    7511.434    7561.846
00:11:20.712  90.00000% :   8721.329    8721.329    8771.742    8771.742    8771.742    8771.742
00:11:20.712  95.00000% :  10284.111   10435.348   10284.111   10183.286   10082.462   10032.049
00:11:20.712  98.00000% :  12149.366   12300.603   12300.603   12250.191   11897.305   11998.129
00:11:20.712  99.00000% :  13006.375   13611.323   13611.323   13510.498   12855.138   12552.665
00:11:20.712  99.50000% :  27020.997   25407.803   23794.609   22080.591   20265.748   14821.218
00:11:20.712  99.90000% :  31860.578   30247.385   28432.542   26819.348   24903.680   19761.625
00:11:20.712  99.99000% :  32263.877   30650.683   28835.840   27222.646   25306.978   20164.923
00:11:20.712  99.99900% :  32465.526   30650.683   28835.840   27222.646   25407.803   20164.923
00:11:20.712 (the 99.99990% and 99.99999% rows match the 99.99900% values for every device)
00:11:20.712 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:11:20.712 ==============================================================================
00:11:20.712        Range in us     Cumulative    IO count
[per-bucket rows omitted; buckets span 5671.385us to 32465.526us, cumulative IO count reaches 100.0000% at 32465.526us]
00:11:20.713 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:11:20.713 ==============================================================================
00:11:20.713        Range in us     Cumulative    IO count
[per-bucket rows omitted; buckets span 5797.415us to 30650.683us, cumulative IO count reaches 100.0000% at 30650.683us]
00:11:20.714 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:11:20.714 ==============================================================================
00:11:20.714        Range in us     Cumulative    IO count
[per-bucket rows omitted; buckets span 5822.622us to 28835.840us, cumulative IO count reaches 100.0000% at 28835.840us]
00:11:20.715 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:11:20.715 ==============================================================================
00:11:20.715        Range in us     Cumulative    IO count
[per-bucket rows omitted; buckets start at 5772.209us, last complete row is 20870.695 - 20971.520: 99.2626% ( 2); the captured log breaks off mid-row at "20971.520 -"]
21072.345: 99.2860% ( 4) 00:11:20.716 21072.345 - 21173.169: 99.3095% ( 4) 00:11:20.716 21173.169 - 21273.994: 99.3329% ( 4) 00:11:20.716 21273.994 - 21374.818: 99.3563% ( 4) 00:11:20.716 21374.818 - 21475.643: 99.3797% ( 4) 00:11:20.716 21475.643 - 21576.468: 99.4031% ( 4) 00:11:20.716 21576.468 - 21677.292: 99.4206% ( 3) 00:11:20.716 21677.292 - 21778.117: 99.4441% ( 4) 00:11:20.716 21778.117 - 21878.942: 99.4675% ( 4) 00:11:20.716 21878.942 - 21979.766: 99.4909% ( 4) 00:11:20.716 21979.766 - 22080.591: 99.5143% ( 4) 00:11:20.716 22080.591 - 22181.415: 99.5377% ( 4) 00:11:20.716 22181.415 - 22282.240: 99.5611% ( 4) 00:11:20.716 22282.240 - 22383.065: 99.5845% ( 4) 00:11:20.716 22383.065 - 22483.889: 99.6079% ( 4) 00:11:20.716 22483.889 - 22584.714: 99.6255% ( 3) 00:11:20.716 25407.803 - 25508.628: 99.6313% ( 1) 00:11:20.716 25508.628 - 25609.452: 99.6547% ( 4) 00:11:20.716 25609.452 - 25710.277: 99.6781% ( 4) 00:11:20.716 25710.277 - 25811.102: 99.7015% ( 4) 00:11:20.716 25811.102 - 26012.751: 99.7484% ( 8) 00:11:20.716 26012.751 - 26214.400: 99.7952% ( 8) 00:11:20.716 26214.400 - 26416.049: 99.8420% ( 8) 00:11:20.716 26416.049 - 26617.698: 99.8830% ( 7) 00:11:20.716 26617.698 - 26819.348: 99.9298% ( 8) 00:11:20.716 26819.348 - 27020.997: 99.9766% ( 8) 00:11:20.716 27020.997 - 27222.646: 100.0000% ( 4) 00:11:20.716 00:11:20.716 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:11:20.716 ============================================================================== 00:11:20.716 Range in us Cumulative IO count 00:11:20.716 5923.446 - 5948.652: 0.0059% ( 1) 00:11:20.716 5973.858 - 5999.065: 0.0351% ( 5) 00:11:20.716 5999.065 - 6024.271: 0.1170% ( 14) 00:11:20.716 6024.271 - 6049.477: 0.1814% ( 11) 00:11:20.716 6049.477 - 6074.683: 0.2750% ( 16) 00:11:20.717 6074.683 - 6099.889: 0.3804% ( 18) 00:11:20.717 6099.889 - 6125.095: 0.4974% ( 20) 00:11:20.717 6125.095 - 6150.302: 0.6613% ( 28) 00:11:20.717 6150.302 - 6175.508: 0.8544% ( 33) 00:11:20.717 6175.508 - 6200.714: 1.0826% ( 39) 00:11:20.717 6200.714 - 6225.920: 1.3343% ( 43) 00:11:20.717 6225.920 - 6251.126: 1.9312% ( 102) 00:11:20.717 6251.126 - 6276.332: 2.2765% ( 59) 00:11:20.717 6276.332 - 6301.538: 2.9026% ( 107) 00:11:20.717 6301.538 - 6326.745: 3.3766% ( 81) 00:11:20.717 6326.745 - 6351.951: 4.0087% ( 108) 00:11:20.717 6351.951 - 6377.157: 4.6582% ( 111) 00:11:20.717 6377.157 - 6402.363: 5.2961% ( 109) 00:11:20.717 6402.363 - 6427.569: 5.8696% ( 98) 00:11:20.717 6427.569 - 6452.775: 6.6479% ( 133) 00:11:20.717 6452.775 - 6503.188: 8.4153% ( 302) 00:11:20.717 6503.188 - 6553.600: 11.2535% ( 485) 00:11:20.717 6553.600 - 6604.012: 13.5709% ( 396) 00:11:20.717 6604.012 - 6654.425: 16.5672% ( 512) 00:11:20.717 6654.425 - 6704.837: 20.3593% ( 648) 00:11:20.717 6704.837 - 6755.249: 24.0461% ( 630) 00:11:20.717 6755.249 - 6805.662: 28.9501% ( 838) 00:11:20.717 6805.662 - 6856.074: 34.3867% ( 929) 00:11:20.717 6856.074 - 6906.486: 40.0574% ( 969) 00:11:20.717 6906.486 - 6956.898: 44.5459% ( 767) 00:11:20.717 6956.898 - 7007.311: 49.6898% ( 879) 00:11:20.717 7007.311 - 7057.723: 54.9801% ( 904) 00:11:20.717 7057.723 - 7108.135: 59.1585% ( 714) 00:11:20.717 7108.135 - 7158.548: 62.4298% ( 559) 00:11:20.717 7158.548 - 7208.960: 65.5606% ( 535) 00:11:20.717 7208.960 - 7259.372: 68.4925% ( 501) 00:11:20.717 7259.372 - 7309.785: 70.0257% ( 262) 00:11:20.717 7309.785 - 7360.197: 71.1493% ( 192) 00:11:20.717 7360.197 - 7410.609: 72.3022% ( 197) 00:11:20.717 7410.609 - 7461.022: 73.6540% ( 231) 00:11:20.717 7461.022 - 7511.434: 
75.1112% ( 249) 00:11:20.717 7511.434 - 7561.846: 76.7498% ( 280) 00:11:20.717 7561.846 - 7612.258: 77.7037% ( 163) 00:11:20.717 7612.258 - 7662.671: 78.5581% ( 146) 00:11:20.717 7662.671 - 7713.083: 79.3188% ( 130) 00:11:20.717 7713.083 - 7763.495: 80.4541% ( 194) 00:11:20.717 7763.495 - 7813.908: 81.4899% ( 177) 00:11:20.717 7813.908 - 7864.320: 82.3677% ( 150) 00:11:20.717 7864.320 - 7914.732: 83.1812% ( 139) 00:11:20.717 7914.732 - 7965.145: 83.9537% ( 132) 00:11:20.717 7965.145 - 8015.557: 84.5096% ( 95) 00:11:20.717 8015.557 - 8065.969: 85.0129% ( 86) 00:11:20.717 8065.969 - 8116.382: 85.4167% ( 69) 00:11:20.717 8116.382 - 8166.794: 85.7853% ( 63) 00:11:20.717 8166.794 - 8217.206: 86.1306% ( 59) 00:11:20.717 8217.206 - 8267.618: 86.4232% ( 50) 00:11:20.717 8267.618 - 8318.031: 86.7100% ( 49) 00:11:20.717 8318.031 - 8368.443: 87.0143% ( 52) 00:11:20.717 8368.443 - 8418.855: 87.3478% ( 57) 00:11:20.717 8418.855 - 8469.268: 87.8336% ( 83) 00:11:20.717 8469.268 - 8519.680: 88.3193% ( 83) 00:11:20.717 8519.680 - 8570.092: 88.8811% ( 96) 00:11:20.717 8570.092 - 8620.505: 89.1795% ( 51) 00:11:20.717 8620.505 - 8670.917: 89.5131% ( 57) 00:11:20.717 8670.917 - 8721.329: 89.8408% ( 56) 00:11:20.717 8721.329 - 8771.742: 90.1217% ( 48) 00:11:20.717 8771.742 - 8822.154: 90.3851% ( 45) 00:11:20.717 8822.154 - 8872.566: 90.6309% ( 42) 00:11:20.717 8872.566 - 8922.978: 90.8532% ( 38) 00:11:20.717 8922.978 - 8973.391: 91.1400% ( 49) 00:11:20.717 8973.391 - 9023.803: 91.4092% ( 46) 00:11:20.717 9023.803 - 9074.215: 91.6959% ( 49) 00:11:20.717 9074.215 - 9124.628: 92.0529% ( 61) 00:11:20.717 9124.628 - 9175.040: 92.3045% ( 43) 00:11:20.717 9175.040 - 9225.452: 92.5094% ( 35) 00:11:20.717 9225.452 - 9275.865: 92.6615% ( 26) 00:11:20.717 9275.865 - 9326.277: 92.8137% ( 26) 00:11:20.717 9326.277 - 9376.689: 92.9600% ( 25) 00:11:20.717 9376.689 - 9427.102: 93.1004% ( 24) 00:11:20.717 9427.102 - 9477.514: 93.3169% ( 37) 00:11:20.717 9477.514 - 9527.926: 93.5452% ( 39) 00:11:20.717 9527.926 - 9578.338: 93.7266% ( 31) 00:11:20.717 9578.338 - 9628.751: 93.8495% ( 21) 00:11:20.717 9628.751 - 9679.163: 93.9958% ( 25) 00:11:20.717 9679.163 - 9729.575: 94.1070% ( 19) 00:11:20.717 9729.575 - 9779.988: 94.3411% ( 40) 00:11:20.717 9779.988 - 9830.400: 94.5634% ( 38) 00:11:20.717 9830.400 - 9880.812: 94.7390% ( 30) 00:11:20.717 9880.812 - 9931.225: 94.8209% ( 14) 00:11:20.717 9931.225 - 9981.637: 94.8970% ( 13) 00:11:20.717 9981.637 - 10032.049: 94.9906% ( 16) 00:11:20.717 10032.049 - 10082.462: 95.0901% ( 17) 00:11:20.717 10082.462 - 10132.874: 95.1545% ( 11) 00:11:20.717 10132.874 - 10183.286: 95.2247% ( 12) 00:11:20.717 10183.286 - 10233.698: 95.2949% ( 12) 00:11:20.717 10233.698 - 10284.111: 95.3476% ( 9) 00:11:20.717 10284.111 - 10334.523: 95.4120% ( 11) 00:11:20.717 10334.523 - 10384.935: 95.4588% ( 8) 00:11:20.717 10384.935 - 10435.348: 95.5349% ( 13) 00:11:20.717 10435.348 - 10485.760: 95.6344% ( 17) 00:11:20.717 10485.760 - 10536.172: 95.7104% ( 13) 00:11:20.717 10536.172 - 10586.585: 95.7631% ( 9) 00:11:20.717 10586.585 - 10636.997: 95.8216% ( 10) 00:11:20.717 10636.997 - 10687.409: 95.8743% ( 9) 00:11:20.717 10687.409 - 10737.822: 95.9094% ( 6) 00:11:20.717 10737.822 - 10788.234: 95.9562% ( 8) 00:11:20.717 10788.234 - 10838.646: 96.0089% ( 9) 00:11:20.717 10838.646 - 10889.058: 96.0382% ( 5) 00:11:20.717 10889.058 - 10939.471: 96.0616% ( 4) 00:11:20.717 10939.471 - 10989.883: 96.0908% ( 5) 00:11:20.717 10989.883 - 11040.295: 96.1552% ( 11) 00:11:20.717 11040.295 - 11090.708: 96.1962% ( 7) 00:11:20.717 
11090.708 - 11141.120: 96.2488% ( 9) 00:11:20.717 11141.120 - 11191.532: 96.3015% ( 9) 00:11:20.717 11191.532 - 11241.945: 96.3542% ( 9) 00:11:20.717 11241.945 - 11292.357: 96.4244% ( 12) 00:11:20.717 11292.357 - 11342.769: 96.4829% ( 10) 00:11:20.717 11342.769 - 11393.182: 96.5414% ( 10) 00:11:20.717 11393.182 - 11443.594: 96.6058% ( 11) 00:11:20.717 11443.594 - 11494.006: 96.6585% ( 9) 00:11:20.717 11494.006 - 11544.418: 96.7170% ( 10) 00:11:20.717 11544.418 - 11594.831: 96.8048% ( 15) 00:11:20.717 11594.831 - 11645.243: 96.9394% ( 23) 00:11:20.717 11645.243 - 11695.655: 97.1032% ( 28) 00:11:20.717 11695.655 - 11746.068: 97.4544% ( 60) 00:11:20.717 11746.068 - 11796.480: 97.6299% ( 30) 00:11:20.717 11796.480 - 11846.892: 97.8289% ( 34) 00:11:20.717 11846.892 - 11897.305: 98.0279% ( 34) 00:11:20.717 11897.305 - 11947.717: 98.1390% ( 19) 00:11:20.717 11947.717 - 11998.129: 98.1976% ( 10) 00:11:20.717 11998.129 - 12048.542: 98.2678% ( 12) 00:11:20.717 12048.542 - 12098.954: 98.3263% ( 10) 00:11:20.717 12098.954 - 12149.366: 98.3790% ( 9) 00:11:20.717 12149.366 - 12199.778: 98.4375% ( 10) 00:11:20.717 12199.778 - 12250.191: 98.4785% ( 7) 00:11:20.717 12250.191 - 12300.603: 98.5136% ( 6) 00:11:20.717 12300.603 - 12351.015: 98.5545% ( 7) 00:11:20.717 12351.015 - 12401.428: 98.5955% ( 7) 00:11:20.717 12401.428 - 12451.840: 98.6306% ( 6) 00:11:20.717 12451.840 - 12502.252: 98.6540% ( 4) 00:11:20.717 12502.252 - 12552.665: 98.6891% ( 6) 00:11:20.717 12552.665 - 12603.077: 98.7184% ( 5) 00:11:20.717 12603.077 - 12653.489: 98.7477% ( 5) 00:11:20.717 12653.489 - 12703.902: 98.7886% ( 7) 00:11:20.717 12703.902 - 12754.314: 98.8647% ( 13) 00:11:20.717 12754.314 - 12804.726: 98.9876% ( 21) 00:11:20.717 12804.726 - 12855.138: 99.0637% ( 13) 00:11:20.717 12855.138 - 12905.551: 99.0929% ( 5) 00:11:20.717 12905.551 - 13006.375: 99.1163% ( 4) 00:11:20.717 13006.375 - 13107.200: 99.1339% ( 3) 00:11:20.717 13107.200 - 13208.025: 99.1456% ( 2) 00:11:20.717 13208.025 - 13308.849: 99.1632% ( 3) 00:11:20.717 13308.849 - 13409.674: 99.1807% ( 3) 00:11:20.717 13409.674 - 13510.498: 99.2041% ( 4) 00:11:20.717 13510.498 - 13611.323: 99.2334% ( 5) 00:11:20.717 13611.323 - 13712.148: 99.2509% ( 3) 00:11:20.717 19055.852 - 19156.677: 99.2568% ( 1) 00:11:20.717 19156.677 - 19257.502: 99.2802% ( 4) 00:11:20.717 19257.502 - 19358.326: 99.3036% ( 4) 00:11:20.717 19358.326 - 19459.151: 99.3270% ( 4) 00:11:20.717 19459.151 - 19559.975: 99.3504% ( 4) 00:11:20.717 19559.975 - 19660.800: 99.3738% ( 4) 00:11:20.717 19660.800 - 19761.625: 99.3972% ( 4) 00:11:20.717 19761.625 - 19862.449: 99.4206% ( 4) 00:11:20.717 19862.449 - 19963.274: 99.4441% ( 4) 00:11:20.717 19963.274 - 20064.098: 99.4675% ( 4) 00:11:20.717 20064.098 - 20164.923: 99.4909% ( 4) 00:11:20.717 20164.923 - 20265.748: 99.5084% ( 3) 00:11:20.717 20265.748 - 20366.572: 99.5318% ( 4) 00:11:20.717 20366.572 - 20467.397: 99.5494% ( 3) 00:11:20.717 20467.397 - 20568.222: 99.5728% ( 4) 00:11:20.718 20568.222 - 20669.046: 99.5962% ( 4) 00:11:20.718 20669.046 - 20769.871: 99.6196% ( 4) 00:11:20.718 20769.871 - 20870.695: 99.6255% ( 1) 00:11:20.718 23693.785 - 23794.609: 99.6430% ( 3) 00:11:20.718 23794.609 - 23895.434: 99.6664% ( 4) 00:11:20.718 23895.434 - 23996.258: 99.6898% ( 4) 00:11:20.718 23996.258 - 24097.083: 99.7132% ( 4) 00:11:20.718 24097.083 - 24197.908: 99.7367% ( 4) 00:11:20.718 24197.908 - 24298.732: 99.7601% ( 4) 00:11:20.718 24298.732 - 24399.557: 99.7835% ( 4) 00:11:20.718 24399.557 - 24500.382: 99.8069% ( 4) 00:11:20.718 24500.382 - 24601.206: 99.8303% ( 
4) 00:11:20.718 24601.206 - 24702.031: 99.8537% ( 4) 00:11:20.718 24702.031 - 24802.855: 99.8771% ( 4) 00:11:20.718 24802.855 - 24903.680: 99.9005% ( 4) 00:11:20.718 24903.680 - 25004.505: 99.9239% ( 4) 00:11:20.718 25004.505 - 25105.329: 99.9473% ( 4) 00:11:20.718 25105.329 - 25206.154: 99.9707% ( 4) 00:11:20.718 25206.154 - 25306.978: 99.9941% ( 4) 00:11:20.718 25306.978 - 25407.803: 100.0000% ( 1) 00:11:20.718 00:11:20.718 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:11:20.718 ============================================================================== 00:11:20.718 Range in us Cumulative IO count 00:11:20.718 5822.622 - 5847.828: 0.0058% ( 1) 00:11:20.718 5847.828 - 5873.034: 0.0117% ( 1) 00:11:20.718 5898.240 - 5923.446: 0.0233% ( 2) 00:11:20.718 5923.446 - 5948.652: 0.0292% ( 1) 00:11:20.718 5948.652 - 5973.858: 0.0525% ( 4) 00:11:20.718 5973.858 - 5999.065: 0.0991% ( 8) 00:11:20.718 5999.065 - 6024.271: 0.1516% ( 9) 00:11:20.718 6024.271 - 6049.477: 0.2157% ( 11) 00:11:20.718 6049.477 - 6074.683: 0.2799% ( 11) 00:11:20.718 6074.683 - 6099.889: 0.4081% ( 22) 00:11:20.718 6099.889 - 6125.095: 0.5889% ( 31) 00:11:20.718 6125.095 - 6150.302: 0.8454% ( 44) 00:11:20.718 6150.302 - 6175.508: 1.1019% ( 44) 00:11:20.718 6175.508 - 6200.714: 1.2885% ( 32) 00:11:20.718 6200.714 - 6225.920: 1.5042% ( 37) 00:11:20.718 6225.920 - 6251.126: 1.8890% ( 66) 00:11:20.718 6251.126 - 6276.332: 2.4604% ( 98) 00:11:20.718 6276.332 - 6301.538: 2.8335% ( 64) 00:11:20.718 6301.538 - 6326.745: 3.3232% ( 84) 00:11:20.718 6326.745 - 6351.951: 3.9471% ( 107) 00:11:20.718 6351.951 - 6377.157: 4.7458% ( 137) 00:11:20.718 6377.157 - 6402.363: 5.2355% ( 84) 00:11:20.718 6402.363 - 6427.569: 5.9876% ( 129) 00:11:20.718 6427.569 - 6452.775: 6.6348% ( 111) 00:11:20.718 6452.775 - 6503.188: 8.4597% ( 313) 00:11:20.718 6503.188 - 6553.600: 10.9258% ( 423) 00:11:20.718 6553.600 - 6604.012: 13.3221% ( 411) 00:11:20.718 6604.012 - 6654.425: 16.6919% ( 578) 00:11:20.718 6654.425 - 6704.837: 19.9044% ( 551) 00:11:20.718 6704.837 - 6755.249: 23.8340% ( 674) 00:11:20.718 6755.249 - 6805.662: 29.7750% ( 1019) 00:11:20.718 6805.662 - 6856.074: 35.0921% ( 912) 00:11:20.718 6856.074 - 6906.486: 39.9662% ( 836) 00:11:20.718 6906.486 - 6956.898: 44.9685% ( 858) 00:11:20.718 6956.898 - 7007.311: 49.7435% ( 819) 00:11:20.718 7007.311 - 7057.723: 54.4543% ( 808) 00:11:20.718 7057.723 - 7108.135: 59.5441% ( 873) 00:11:20.718 7108.135 - 7158.548: 62.7157% ( 544) 00:11:20.718 7158.548 - 7208.960: 65.3860% ( 458) 00:11:20.718 7208.960 - 7259.372: 68.2253% ( 487) 00:11:20.718 7259.372 - 7309.785: 69.7994% ( 270) 00:11:20.718 7309.785 - 7360.197: 71.1812% ( 237) 00:11:20.718 7360.197 - 7410.609: 72.4289% ( 214) 00:11:20.718 7410.609 - 7461.022: 73.4200% ( 170) 00:11:20.718 7461.022 - 7511.434: 74.4578% ( 178) 00:11:20.718 7511.434 - 7561.846: 75.4256% ( 166) 00:11:20.718 7561.846 - 7612.258: 76.5858% ( 199) 00:11:20.718 7612.258 - 7662.671: 77.6702% ( 186) 00:11:20.718 7662.671 - 7713.083: 78.9529% ( 220) 00:11:20.718 7713.083 - 7763.495: 80.0956% ( 196) 00:11:20.718 7763.495 - 7813.908: 80.9235% ( 142) 00:11:20.718 7813.908 - 7864.320: 81.9263% ( 172) 00:11:20.718 7864.320 - 7914.732: 83.0340% ( 190) 00:11:20.718 7914.732 - 7965.145: 83.6112% ( 99) 00:11:20.718 7965.145 - 8015.557: 84.2701% ( 113) 00:11:20.718 8015.557 - 8065.969: 84.8647% ( 102) 00:11:20.718 8065.969 - 8116.382: 85.4944% ( 108) 00:11:20.718 8116.382 - 8166.794: 85.8909% ( 68) 00:11:20.718 8166.794 - 8217.206: 86.2290% ( 58) 00:11:20.718 8217.206 - 
8267.618: 86.5322% ( 52) 00:11:20.718 8267.618 - 8318.031: 86.8179% ( 49) 00:11:20.718 8318.031 - 8368.443: 87.1094% ( 50) 00:11:20.718 8368.443 - 8418.855: 87.4242% ( 54) 00:11:20.718 8418.855 - 8469.268: 87.6924% ( 46) 00:11:20.718 8469.268 - 8519.680: 88.0131% ( 55) 00:11:20.718 8519.680 - 8570.092: 88.5436% ( 91) 00:11:20.718 8570.092 - 8620.505: 89.0392% ( 85) 00:11:20.718 8620.505 - 8670.917: 89.3832% ( 59) 00:11:20.718 8670.917 - 8721.329: 89.8379% ( 78) 00:11:20.718 8721.329 - 8771.742: 90.2344% ( 68) 00:11:20.718 8771.742 - 8822.154: 90.7299% ( 85) 00:11:20.718 8822.154 - 8872.566: 91.2838% ( 95) 00:11:20.718 8872.566 - 8922.978: 91.6161% ( 57) 00:11:20.718 8922.978 - 8973.391: 91.8902% ( 47) 00:11:20.718 8973.391 - 9023.803: 92.1350% ( 42) 00:11:20.718 9023.803 - 9074.215: 92.2983% ( 28) 00:11:20.718 9074.215 - 9124.628: 92.4499% ( 26) 00:11:20.718 9124.628 - 9175.040: 92.5548% ( 18) 00:11:20.718 9175.040 - 9225.452: 92.6772% ( 21) 00:11:20.718 9225.452 - 9275.865: 92.7647% ( 15) 00:11:20.718 9275.865 - 9326.277: 92.8871% ( 21) 00:11:20.718 9326.277 - 9376.689: 93.0212% ( 23) 00:11:20.718 9376.689 - 9427.102: 93.1728% ( 26) 00:11:20.718 9427.102 - 9477.514: 93.3361% ( 28) 00:11:20.718 9477.514 - 9527.926: 93.5110% ( 30) 00:11:20.718 9527.926 - 9578.338: 93.6392% ( 22) 00:11:20.718 9578.338 - 9628.751: 93.7500% ( 19) 00:11:20.718 9628.751 - 9679.163: 94.0648% ( 54) 00:11:20.718 9679.163 - 9729.575: 94.2106% ( 25) 00:11:20.718 9729.575 - 9779.988: 94.3214% ( 19) 00:11:20.718 9779.988 - 9830.400: 94.5487% ( 39) 00:11:20.718 9830.400 - 9880.812: 94.7178% ( 29) 00:11:20.718 9880.812 - 9931.225: 94.8169% ( 17) 00:11:20.718 9931.225 - 9981.637: 94.9277% ( 19) 00:11:20.718 9981.637 - 10032.049: 95.0152% ( 15) 00:11:20.718 10032.049 - 10082.462: 95.1259% ( 19) 00:11:20.718 10082.462 - 10132.874: 95.2076% ( 14) 00:11:20.718 10132.874 - 10183.286: 95.2659% ( 10) 00:11:20.718 10183.286 - 10233.698: 95.3475% ( 14) 00:11:20.718 10233.698 - 10284.111: 95.4000% ( 9) 00:11:20.718 10284.111 - 10334.523: 95.4291% ( 5) 00:11:20.718 10334.523 - 10384.935: 95.4524% ( 4) 00:11:20.718 10384.935 - 10435.348: 95.4816% ( 5) 00:11:20.718 10435.348 - 10485.760: 95.5166% ( 6) 00:11:20.718 10485.760 - 10536.172: 95.5515% ( 6) 00:11:20.718 10536.172 - 10586.585: 95.5807% ( 5) 00:11:20.718 10586.585 - 10636.997: 95.5865% ( 1) 00:11:20.718 10636.997 - 10687.409: 95.6040% ( 3) 00:11:20.718 10687.409 - 10737.822: 95.6215% ( 3) 00:11:20.718 10737.822 - 10788.234: 95.6332% ( 2) 00:11:20.718 10788.234 - 10838.646: 95.6856% ( 9) 00:11:20.718 10838.646 - 10889.058: 95.7264% ( 7) 00:11:20.718 10889.058 - 10939.471: 95.7731% ( 8) 00:11:20.718 10939.471 - 10989.883: 95.8256% ( 9) 00:11:20.718 10989.883 - 11040.295: 95.8897% ( 11) 00:11:20.718 11040.295 - 11090.708: 95.9771% ( 15) 00:11:20.718 11090.708 - 11141.120: 96.0238% ( 8) 00:11:20.718 11141.120 - 11191.532: 96.0646% ( 7) 00:11:20.718 11191.532 - 11241.945: 96.1404% ( 13) 00:11:20.718 11241.945 - 11292.357: 96.3328% ( 33) 00:11:20.718 11292.357 - 11342.769: 96.3911% ( 10) 00:11:20.718 11342.769 - 11393.182: 96.4319% ( 7) 00:11:20.718 11393.182 - 11443.594: 96.5077% ( 13) 00:11:20.718 11443.594 - 11494.006: 96.6476% ( 24) 00:11:20.718 11494.006 - 11544.418: 96.7875% ( 24) 00:11:20.718 11544.418 - 11594.831: 96.9042% ( 20) 00:11:20.718 11594.831 - 11645.243: 97.0324% ( 22) 00:11:20.718 11645.243 - 11695.655: 97.1607% ( 22) 00:11:20.718 11695.655 - 11746.068: 97.2481% ( 15) 00:11:20.718 11746.068 - 11796.480: 97.3997% ( 26) 00:11:20.718 11796.480 - 11846.892: 97.5630% ( 
28) 00:11:20.718 11846.892 - 11897.305: 97.6912% ( 22) 00:11:20.718 11897.305 - 11947.717: 97.8953% ( 35) 00:11:20.718 11947.717 - 11998.129: 98.1110% ( 37) 00:11:20.718 11998.129 - 12048.542: 98.2101% ( 17) 00:11:20.718 12048.542 - 12098.954: 98.3442% ( 23) 00:11:20.718 12098.954 - 12149.366: 98.5366% ( 33) 00:11:20.718 12149.366 - 12199.778: 98.6765% ( 24) 00:11:20.718 12199.778 - 12250.191: 98.7757% ( 17) 00:11:20.718 12250.191 - 12300.603: 98.8631% ( 15) 00:11:20.718 12300.603 - 12351.015: 98.9039% ( 7) 00:11:20.718 12351.015 - 12401.428: 98.9389% ( 6) 00:11:20.718 12401.428 - 12451.840: 98.9622% ( 4) 00:11:20.718 12451.840 - 12502.252: 98.9914% ( 5) 00:11:20.718 12502.252 - 12552.665: 99.0205% ( 5) 00:11:20.718 12552.665 - 12603.077: 99.0380% ( 3) 00:11:20.718 12603.077 - 12653.489: 99.0672% ( 5) 00:11:20.718 12653.489 - 12703.902: 99.0905% ( 4) 00:11:20.718 12703.902 - 12754.314: 99.1138% ( 4) 00:11:20.718 12754.314 - 12804.726: 99.1430% ( 5) 00:11:20.718 12804.726 - 12855.138: 99.1721% ( 5) 00:11:20.718 12855.138 - 12905.551: 99.2013% ( 5) 00:11:20.718 12905.551 - 13006.375: 99.2304% ( 5) 00:11:20.718 13006.375 - 13107.200: 99.2537% ( 4) 00:11:20.718 13712.148 - 13812.972: 99.2712% ( 3) 00:11:20.718 13812.972 - 13913.797: 99.2945% ( 4) 00:11:20.718 13913.797 - 14014.622: 99.3179% ( 4) 00:11:20.718 14014.622 - 14115.446: 99.3412% ( 4) 00:11:20.718 14115.446 - 14216.271: 99.3645% ( 4) 00:11:20.718 14216.271 - 14317.095: 99.3878% ( 4) 00:11:20.718 14317.095 - 14417.920: 99.4111% ( 4) 00:11:20.718 14417.920 - 14518.745: 99.4345% ( 4) 00:11:20.718 14518.745 - 14619.569: 99.4578% ( 4) 00:11:20.718 14619.569 - 14720.394: 99.4869% ( 5) 00:11:20.718 14720.394 - 14821.218: 99.5103% ( 4) 00:11:20.718 14821.218 - 14922.043: 99.5336% ( 4) 00:11:20.718 14922.043 - 15022.868: 99.5569% ( 4) 00:11:20.718 15022.868 - 15123.692: 99.5802% ( 4) 00:11:20.719 15123.692 - 15224.517: 99.6035% ( 4) 00:11:20.719 15224.517 - 15325.342: 99.6269% ( 4) 00:11:20.719 18450.905 - 18551.729: 99.6327% ( 1) 00:11:20.719 18551.729 - 18652.554: 99.6560% ( 4) 00:11:20.719 18652.554 - 18753.378: 99.6793% ( 4) 00:11:20.719 18753.378 - 18854.203: 99.7027% ( 4) 00:11:20.719 18854.203 - 18955.028: 99.7260% ( 4) 00:11:20.719 18955.028 - 19055.852: 99.7493% ( 4) 00:11:20.719 19055.852 - 19156.677: 99.7785% ( 5) 00:11:20.719 19156.677 - 19257.502: 99.8018% ( 4) 00:11:20.719 19257.502 - 19358.326: 99.8193% ( 3) 00:11:20.719 19358.326 - 19459.151: 99.8426% ( 4) 00:11:20.719 19459.151 - 19559.975: 99.8659% ( 4) 00:11:20.719 19559.975 - 19660.800: 99.8892% ( 4) 00:11:20.719 19660.800 - 19761.625: 99.9125% ( 4) 00:11:20.719 19761.625 - 19862.449: 99.9359% ( 4) 00:11:20.719 19862.449 - 19963.274: 99.9592% ( 4) 00:11:20.719 19963.274 - 20064.098: 99.9825% ( 4) 00:11:20.719 20064.098 - 20164.923: 100.0000% ( 3) 00:11:20.719 00:11:20.719 19:29:39 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:11:20.719 00:11:20.719 real 0m2.511s 00:11:20.719 user 0m2.212s 00:11:20.719 sys 0m0.191s 00:11:20.719 19:29:39 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.719 19:29:39 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:11:20.719 ************************************ 00:11:20.719 END TEST nvme_perf 00:11:20.719 ************************************ 00:11:20.719 19:29:39 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:11:20.719 19:29:39 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:20.719 19:29:39 nvme 
-- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:20.719 19:29:39 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:20.719 ************************************
00:11:20.719 START TEST nvme_hello_world
00:11:20.719 ************************************
00:11:20.719 19:29:39 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:11:20.719 Initializing NVMe Controllers
00:11:20.719 Attached to 0000:00:10.0
00:11:20.719 Namespace ID: 1 size: 6GB
00:11:20.719 Attached to 0000:00:11.0
00:11:20.719 Namespace ID: 1 size: 5GB
00:11:20.719 Attached to 0000:00:13.0
00:11:20.719 Namespace ID: 1 size: 1GB
00:11:20.719 Attached to 0000:00:12.0
00:11:20.719 Namespace ID: 1 size: 4GB
00:11:20.719 Namespace ID: 2 size: 4GB
00:11:20.719 Namespace ID: 3 size: 4GB
00:11:20.719 Initialization complete.
00:11:20.719 INFO: using host memory buffer for IO
00:11:20.719 Hello world!
00:11:20.719 INFO: using host memory buffer for IO
00:11:20.719 Hello world!
00:11:20.719 INFO: using host memory buffer for IO
00:11:20.719 Hello world!
00:11:20.719 INFO: using host memory buffer for IO
00:11:20.719 Hello world!
00:11:20.719 INFO: using host memory buffer for IO
00:11:20.719 Hello world!
00:11:20.719 INFO: using host memory buffer for IO
00:11:20.719 Hello world!
00:11:20.719
00:11:20.719 real 0m0.225s
00:11:20.719 user 0m0.080s
00:11:20.719 sys 0m0.104s
00:11:20.719 ************************************
00:11:20.719 END TEST nvme_hello_world
00:11:20.719 ************************************
00:11:20.719 19:29:39 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:20.719 19:29:39 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:11:20.719 19:29:39 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:11:20.719 19:29:39 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:20.719 19:29:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:20.719 19:29:39 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:20.719 ************************************
00:11:20.719 START TEST nvme_sgl
00:11:20.719 ************************************
00:11:20.719 19:29:39 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:11:20.977 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:11:20.977 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:11:20.977 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:11:20.977 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:11:20.977 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:11:20.977 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:11:20.977 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:11:20.977 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:11:20.977 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:11:20.977 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:11:20.977 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:11:20.977 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:11:20.977 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:11:20.977 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:11:20.977 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:11:20.977 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:11:20.977 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:11:20.977 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:11:20.977 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:11:20.977 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:11:20.977 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:11:20.977 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:11:20.977 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:11:20.977 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:11:20.977 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:11:20.977 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:11:20.977 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:11:20.977 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:11:20.977 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:11:20.977 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:11:20.977 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:11:20.977 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:11:20.977 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:11:20.977 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:11:20.977 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:11:20.977 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:11:20.977 NVMe Readv/Writev Request test
00:11:20.977 Attached to 0000:00:10.0
00:11:20.977 Attached to 0000:00:11.0
00:11:20.977 Attached to 0000:00:13.0
00:11:20.977 Attached to 0000:00:12.0
00:11:20.977 0000:00:10.0: build_io_request_2 test passed
00:11:20.977 0000:00:10.0: build_io_request_4 test passed
00:11:20.977 0000:00:10.0: build_io_request_5 test passed
00:11:20.977 0000:00:10.0: build_io_request_6 test passed
00:11:20.977 0000:00:10.0: build_io_request_7 test passed
00:11:20.977 0000:00:10.0: build_io_request_10 test passed
00:11:20.977 0000:00:11.0: build_io_request_2 test passed
00:11:20.977 0000:00:11.0: build_io_request_4 test passed
00:11:20.977 0000:00:11.0: build_io_request_5 test passed
00:11:20.977 0000:00:11.0: build_io_request_6 test passed
00:11:20.977 0000:00:11.0: build_io_request_7 test passed
00:11:20.977 0000:00:11.0: build_io_request_10 test passed
00:11:20.977 Cleaning up...
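The sgl run above interleaves expected rejections (build_io_request_* variants that must fail the IO length check) with the requests that build successfully. A minimal sketch for tallying the two outcomes from a saved copy of this console output; nvme-sgl.log is an assumed capture file name, not something the harness writes itself:

    # Count expected rejections vs. successful request builds in a saved
    # copy of the sgl test output ("nvme-sgl.log" is an illustrative name).
    rejected=$(grep -c 'Invalid IO length parameter' nvme-sgl.log)
    passed=$(grep -c 'test passed' nvme-sgl.log)
    echo "expected rejections: ${rejected}, passed: ${passed}"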
00:11:20.977
00:11:20.977 real 0m0.283s
00:11:20.977 user 0m0.140s
00:11:20.977 sys 0m0.098s
00:11:20.977 19:29:39 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:20.977 19:29:39 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:11:20.977 ************************************
00:11:20.977 END TEST nvme_sgl
00:11:20.977 ************************************
00:11:20.977 19:29:39 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:11:20.977 19:29:39 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:20.977 19:29:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:20.977 19:29:39 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:20.977 ************************************
00:11:20.977 START TEST nvme_e2edp
00:11:20.977 ************************************
00:11:20.977 19:29:39 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:11:21.235 NVMe Write/Read with End-to-End data protection test
00:11:21.235 Attached to 0000:00:10.0
00:11:21.235 Attached to 0000:00:11.0
00:11:21.235 Attached to 0000:00:13.0
00:11:21.235 Attached to 0000:00:12.0
00:11:21.235 Cleaning up...
00:11:21.235
00:11:21.235 real 0m0.212s
00:11:21.235 user 0m0.079s
00:11:21.235 sys 0m0.089s
00:11:21.235 19:29:40 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:21.235 19:29:40 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:11:21.235 ************************************
00:11:21.235 END TEST nvme_e2edp
00:11:21.235 ************************************
00:11:21.235 19:29:40 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:11:21.235 19:29:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:21.235 19:29:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:21.235 19:29:40 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:21.235 ************************************
00:11:21.235 START TEST nvme_reserve
00:11:21.235 ************************************
00:11:21.235 19:29:40 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:11:21.493 =====================================================
00:11:21.493 NVMe Controller at PCI bus 0, device 16, function 0
00:11:21.493 =====================================================
00:11:21.493 Reservations: Not Supported
00:11:21.493 =====================================================
00:11:21.493 NVMe Controller at PCI bus 0, device 17, function 0
00:11:21.493 =====================================================
00:11:21.493 Reservations: Not Supported
00:11:21.493 =====================================================
00:11:21.493 NVMe Controller at PCI bus 0, device 19, function 0
00:11:21.493 =====================================================
00:11:21.493 Reservations: Not Supported
00:11:21.493 =====================================================
00:11:21.493 NVMe Controller at PCI bus 0, device 18, function 0
00:11:21.493 =====================================================
00:11:21.493 Reservations: Not Supported
00:11:21.493 Reservation test passed
00:11:21.493 ************************************
00:11:21.493 END TEST nvme_reserve
00:11:21.493 ************************************
00:11:21.493
00:11:21.493 real 0m0.228s
00:11:21.493 user 0m0.076s
00:11:21.493 sys 0m0.090s
00:11:21.493 19:29:40 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:21.493 19:29:40 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:11:21.493 19:29:40 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:11:21.493 19:29:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:21.493 19:29:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:21.493 19:29:40 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:21.493 ************************************
00:11:21.493 START TEST nvme_err_injection
00:11:21.493 ************************************
00:11:21.493 19:29:40 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:11:21.751 NVMe Error Injection test
00:11:21.751 Attached to 0000:00:10.0
00:11:21.751 Attached to 0000:00:11.0
00:11:21.751 Attached to 0000:00:13.0
00:11:21.751 Attached to 0000:00:12.0
00:11:21.751 0000:00:13.0: get features failed as expected
00:11:21.751 0000:00:12.0: get features failed as expected
00:11:21.751 0000:00:10.0: get features failed as expected
00:11:21.751 0000:00:11.0: get features failed as expected
00:11:21.751 0000:00:10.0: get features successfully as expected
00:11:21.751 0000:00:11.0: get features successfully as expected
00:11:21.751 0000:00:13.0: get features successfully as expected
00:11:21.751 0000:00:12.0: get features successfully as expected
00:11:21.751 0000:00:10.0: read failed as expected
00:11:21.751 0000:00:11.0: read failed as expected
00:11:21.751 0000:00:13.0: read failed as expected
00:11:21.751 0000:00:12.0: read failed as expected
00:11:21.751 0000:00:10.0: read successfully as expected
00:11:21.751 0000:00:11.0: read successfully as expected
00:11:21.751 0000:00:13.0: read successfully as expected
00:11:21.751 0000:00:12.0: read successfully as expected
00:11:21.751 Cleaning up...
00:11:21.751
00:11:21.751 real 0m0.231s
00:11:21.751 user 0m0.086s
00:11:21.751 sys 0m0.099s
00:11:21.751 ************************************
00:11:21.751 END TEST nvme_err_injection
00:11:21.751 ************************************
00:11:21.751 19:29:40 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:21.751 19:29:40 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:11:21.751 19:29:40 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:11:21.751 19:29:40 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:11:21.751 19:29:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:21.751 19:29:40 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:21.751 ************************************
00:11:21.751 START TEST nvme_overhead
00:11:21.751 ************************************
00:11:21.751 19:29:40 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:11:23.126 Initializing NVMe Controllers
00:11:23.126 Attached to 0000:00:10.0
00:11:23.126 Attached to 0000:00:11.0
00:11:23.126 Attached to 0000:00:13.0
00:11:23.126 Attached to 0000:00:12.0
00:11:23.126 Initialization complete. Launching workers.
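The overhead tool that just launched reports per-IO submit and complete latency in nanoseconds, first as avg/min/max summary lines and then as cumulative histograms (both follow below). A minimal sketch for pulling those summary lines out of a saved copy of this output and converting the averages to microseconds; overhead.log is an assumed capture file, not something the test writes itself:

    # Extract the "submit (in ns)" / "complete (in ns)" summary lines from a
    # saved copy of this console output and report the avg field in usec.
    awk -F'= ' '/(submit|complete) \(in ns\) avg, min, max/ {
        split($2, f, ", ");                     # f[1]=avg, f[2]=min, f[3]=max
        printf "%s avg: %.2f us\n", $1, f[1] / 1000
    }' overhead.log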
00:11:23.126 submit (in ns) avg, min, max = 11402.7, 9973.1, 272386.2 00:11:23.126 complete (in ns) avg, min, max = 7539.6, 7196.9, 65603.8 00:11:23.126 00:11:23.126 Submit histogram 00:11:23.126 ================ 00:11:23.126 Range in us Cumulative Count 00:11:23.126 9.945 - 9.994: 0.0061% ( 1) 00:11:23.126 10.043 - 10.092: 0.0121% ( 1) 00:11:23.126 10.142 - 10.191: 0.0182% ( 1) 00:11:23.126 10.289 - 10.338: 0.0242% ( 1) 00:11:23.126 10.634 - 10.683: 0.0363% ( 2) 00:11:23.126 10.683 - 10.732: 0.0424% ( 1) 00:11:23.126 10.782 - 10.831: 0.1453% ( 17) 00:11:23.126 10.831 - 10.880: 0.7083% ( 93) 00:11:23.126 10.880 - 10.929: 3.3295% ( 433) 00:11:23.126 10.929 - 10.978: 10.2488% ( 1143) 00:11:23.126 10.978 - 11.028: 21.9444% ( 1932) 00:11:23.126 11.028 - 11.077: 36.6245% ( 2425) 00:11:23.126 11.077 - 11.126: 51.2561% ( 2417) 00:11:23.126 11.126 - 11.175: 62.3948% ( 1840) 00:11:23.126 11.175 - 11.225: 69.9800% ( 1253) 00:11:23.126 11.225 - 11.274: 74.9985% ( 829) 00:11:23.126 11.274 - 11.323: 78.5278% ( 583) 00:11:23.126 11.323 - 11.372: 81.0219% ( 412) 00:11:23.126 11.372 - 11.422: 82.9469% ( 318) 00:11:23.126 11.422 - 11.471: 84.2605% ( 217) 00:11:23.126 11.471 - 11.520: 85.3381% ( 178) 00:11:23.126 11.520 - 11.569: 86.2219% ( 146) 00:11:23.126 11.569 - 11.618: 87.0028% ( 129) 00:11:23.126 11.618 - 11.668: 87.8261% ( 136) 00:11:23.126 11.668 - 11.717: 88.4981% ( 111) 00:11:23.126 11.717 - 11.766: 89.1579% ( 109) 00:11:23.126 11.766 - 11.815: 89.9631% ( 133) 00:11:23.126 11.815 - 11.865: 90.8106% ( 140) 00:11:23.127 11.865 - 11.914: 91.7126% ( 149) 00:11:23.127 11.914 - 11.963: 92.4087% ( 115) 00:11:23.127 11.963 - 12.012: 93.1412% ( 121) 00:11:23.127 12.012 - 12.062: 93.9222% ( 129) 00:11:23.127 12.062 - 12.111: 94.5699% ( 107) 00:11:23.127 12.111 - 12.160: 95.0179% ( 74) 00:11:23.127 12.160 - 12.209: 95.3387% ( 53) 00:11:23.127 12.209 - 12.258: 95.6595% ( 53) 00:11:23.127 12.258 - 12.308: 95.9259% ( 44) 00:11:23.127 12.308 - 12.357: 96.0651% ( 23) 00:11:23.127 12.357 - 12.406: 96.2165% ( 25) 00:11:23.127 12.406 - 12.455: 96.3497% ( 22) 00:11:23.127 12.455 - 12.505: 96.4707% ( 20) 00:11:23.127 12.505 - 12.554: 96.5736% ( 17) 00:11:23.127 12.554 - 12.603: 96.6221% ( 8) 00:11:23.127 12.603 - 12.702: 96.6826% ( 10) 00:11:23.127 12.702 - 12.800: 96.7492% ( 11) 00:11:23.127 12.800 - 12.898: 96.7855% ( 6) 00:11:23.127 12.898 - 12.997: 96.8461% ( 10) 00:11:23.127 12.997 - 13.095: 96.9126% ( 11) 00:11:23.127 13.095 - 13.194: 96.9853% ( 12) 00:11:23.127 13.194 - 13.292: 97.0398% ( 9) 00:11:23.127 13.292 - 13.391: 97.1003% ( 10) 00:11:23.127 13.391 - 13.489: 97.2032% ( 17) 00:11:23.127 13.489 - 13.588: 97.3243% ( 20) 00:11:23.127 13.588 - 13.686: 97.4272% ( 17) 00:11:23.127 13.686 - 13.785: 97.4998% ( 12) 00:11:23.127 13.785 - 13.883: 97.5604% ( 10) 00:11:23.127 13.883 - 13.982: 97.6209% ( 10) 00:11:23.127 13.982 - 14.080: 97.6694% ( 8) 00:11:23.127 14.080 - 14.178: 97.7117% ( 7) 00:11:23.127 14.178 - 14.277: 97.7844% ( 12) 00:11:23.127 14.277 - 14.375: 97.8086% ( 4) 00:11:23.127 14.375 - 14.474: 97.8267% ( 3) 00:11:23.127 14.474 - 14.572: 97.8631% ( 6) 00:11:23.127 14.572 - 14.671: 97.8994% ( 6) 00:11:23.127 14.671 - 14.769: 97.9236% ( 4) 00:11:23.127 14.769 - 14.868: 97.9660% ( 7) 00:11:23.127 14.868 - 14.966: 98.0326% ( 11) 00:11:23.127 14.966 - 15.065: 98.0749% ( 7) 00:11:23.127 15.065 - 15.163: 98.1294% ( 9) 00:11:23.127 15.163 - 15.262: 98.1597% ( 5) 00:11:23.127 15.262 - 15.360: 98.1839% ( 4) 00:11:23.127 15.360 - 15.458: 98.2142% ( 5) 00:11:23.127 15.458 - 15.557: 98.2505% ( 6) 00:11:23.127 
15.557 - 15.655: 98.2808% ( 5) 00:11:23.127 15.655 - 15.754: 98.2929% ( 2) 00:11:23.127 15.754 - 15.852: 98.3231% ( 5) 00:11:23.127 15.852 - 15.951: 98.3292% ( 1) 00:11:23.127 15.951 - 16.049: 98.3413% ( 2) 00:11:23.127 16.049 - 16.148: 98.3655% ( 4) 00:11:23.127 16.148 - 16.246: 98.3716% ( 1) 00:11:23.127 16.246 - 16.345: 98.4079% ( 6) 00:11:23.127 16.345 - 16.443: 98.4321% ( 4) 00:11:23.127 16.443 - 16.542: 98.5108% ( 13) 00:11:23.127 16.542 - 16.640: 98.6137% ( 17) 00:11:23.127 16.640 - 16.738: 98.7348% ( 20) 00:11:23.127 16.738 - 16.837: 98.8135% ( 13) 00:11:23.127 16.837 - 16.935: 98.9043% ( 15) 00:11:23.127 16.935 - 17.034: 98.9709% ( 11) 00:11:23.127 17.034 - 17.132: 99.0556% ( 14) 00:11:23.127 17.132 - 17.231: 99.1101% ( 9) 00:11:23.127 17.231 - 17.329: 99.2009% ( 15) 00:11:23.127 17.329 - 17.428: 99.2675% ( 11) 00:11:23.127 17.428 - 17.526: 99.3462% ( 13) 00:11:23.127 17.526 - 17.625: 99.4007% ( 9) 00:11:23.127 17.625 - 17.723: 99.4612% ( 10) 00:11:23.127 17.723 - 17.822: 99.5218% ( 10) 00:11:23.127 17.822 - 17.920: 99.5641% ( 7) 00:11:23.127 17.920 - 18.018: 99.6065% ( 7) 00:11:23.127 18.018 - 18.117: 99.6307% ( 4) 00:11:23.127 18.117 - 18.215: 99.6549% ( 4) 00:11:23.127 18.215 - 18.314: 99.6731% ( 3) 00:11:23.127 18.314 - 18.412: 99.6913% ( 3) 00:11:23.127 18.412 - 18.511: 99.7155% ( 4) 00:11:23.127 18.511 - 18.609: 99.7276% ( 2) 00:11:23.127 18.806 - 18.905: 99.7457% ( 3) 00:11:23.127 19.003 - 19.102: 99.7579% ( 2) 00:11:23.127 19.200 - 19.298: 99.7639% ( 1) 00:11:23.127 19.495 - 19.594: 99.7700% ( 1) 00:11:23.127 20.185 - 20.283: 99.7821% ( 2) 00:11:23.127 20.283 - 20.382: 99.7881% ( 1) 00:11:23.127 20.382 - 20.480: 99.7942% ( 1) 00:11:23.127 20.480 - 20.578: 99.8002% ( 1) 00:11:23.127 20.972 - 21.071: 99.8063% ( 1) 00:11:23.127 21.169 - 21.268: 99.8123% ( 1) 00:11:23.127 21.268 - 21.366: 99.8184% ( 1) 00:11:23.127 21.858 - 21.957: 99.8244% ( 1) 00:11:23.127 22.055 - 22.154: 99.8305% ( 1) 00:11:23.127 22.252 - 22.351: 99.8366% ( 1) 00:11:23.127 22.351 - 22.449: 99.8426% ( 1) 00:11:23.127 22.449 - 22.548: 99.8487% ( 1) 00:11:23.127 22.548 - 22.646: 99.8547% ( 1) 00:11:23.127 22.646 - 22.745: 99.8668% ( 2) 00:11:23.127 22.942 - 23.040: 99.8729% ( 1) 00:11:23.127 23.631 - 23.729: 99.8850% ( 2) 00:11:23.127 25.403 - 25.600: 99.8910% ( 1) 00:11:23.127 25.994 - 26.191: 99.8971% ( 1) 00:11:23.127 26.585 - 26.782: 99.9031% ( 1) 00:11:23.127 26.782 - 26.978: 99.9092% ( 1) 00:11:23.127 27.372 - 27.569: 99.9152% ( 1) 00:11:23.127 28.751 - 28.948: 99.9213% ( 1) 00:11:23.127 29.342 - 29.538: 99.9274% ( 1) 00:11:23.127 29.735 - 29.932: 99.9334% ( 1) 00:11:23.127 31.311 - 31.508: 99.9395% ( 1) 00:11:23.127 33.280 - 33.477: 99.9455% ( 1) 00:11:23.127 37.809 - 38.006: 99.9516% ( 1) 00:11:23.127 45.883 - 46.080: 99.9576% ( 1) 00:11:23.127 48.640 - 48.837: 99.9637% ( 1) 00:11:23.127 49.822 - 50.018: 99.9697% ( 1) 00:11:23.127 56.714 - 57.108: 99.9758% ( 1) 00:11:23.127 59.077 - 59.471: 99.9818% ( 1) 00:11:23.127 95.311 - 95.705: 99.9879% ( 1) 00:11:23.127 99.249 - 99.643: 99.9939% ( 1) 00:11:23.127 270.966 - 272.542: 100.0000% ( 1) 00:11:23.127 00:11:23.127 Complete histogram 00:11:23.127 ================== 00:11:23.127 Range in us Cumulative Count 00:11:23.127 7.188 - 7.237: 0.1513% ( 25) 00:11:23.127 7.237 - 7.286: 4.0136% ( 638) 00:11:23.127 7.286 - 7.335: 25.8006% ( 3599) 00:11:23.127 7.335 - 7.385: 55.3181% ( 4876) 00:11:23.127 7.385 - 7.434: 74.0723% ( 3098) 00:11:23.127 7.434 - 7.483: 84.7751% ( 1768) 00:11:23.127 7.483 - 7.532: 90.4776% ( 942) 00:11:23.127 7.532 - 7.582: 93.0807% ( 430) 
00:11:23.127 7.582 - 7.631: 94.4791% ( 231) 00:11:23.127 7.631 - 7.680: 95.3508% ( 144) 00:11:23.127 7.680 - 7.729: 95.7685% ( 69) 00:11:23.127 7.729 - 7.778: 95.9683% ( 33) 00:11:23.127 7.778 - 7.828: 96.0651% ( 16) 00:11:23.127 7.828 - 7.877: 96.1802% ( 19) 00:11:23.127 7.877 - 7.926: 96.2710% ( 15) 00:11:23.127 7.926 - 7.975: 96.3678% ( 16) 00:11:23.127 7.975 - 8.025: 96.4526% ( 14) 00:11:23.127 8.025 - 8.074: 96.5071% ( 9) 00:11:23.127 8.074 - 8.123: 96.5373% ( 5) 00:11:23.127 8.123 - 8.172: 96.6100% ( 12) 00:11:23.127 8.172 - 8.222: 96.7129% ( 17) 00:11:23.127 8.222 - 8.271: 96.8279% ( 19) 00:11:23.127 8.271 - 8.320: 97.0035% ( 29) 00:11:23.127 8.320 - 8.369: 97.1548% ( 25) 00:11:23.127 8.369 - 8.418: 97.3848% ( 38) 00:11:23.127 8.418 - 8.468: 97.5785% ( 32) 00:11:23.127 8.468 - 8.517: 97.7480% ( 28) 00:11:23.127 8.517 - 8.566: 97.7723% ( 4) 00:11:23.127 8.566 - 8.615: 97.8389% ( 11) 00:11:23.127 8.615 - 8.665: 97.8873% ( 8) 00:11:23.127 8.665 - 8.714: 97.9054% ( 3) 00:11:23.127 8.763 - 8.812: 97.9115% ( 1) 00:11:23.127 8.812 - 8.862: 97.9297% ( 3) 00:11:23.127 8.862 - 8.911: 97.9418% ( 2) 00:11:23.127 9.009 - 9.058: 97.9539% ( 2) 00:11:23.127 9.206 - 9.255: 97.9599% ( 1) 00:11:23.127 9.255 - 9.305: 97.9660% ( 1) 00:11:23.127 9.305 - 9.354: 97.9720% ( 1) 00:11:23.127 9.354 - 9.403: 97.9841% ( 2) 00:11:23.127 9.403 - 9.452: 97.9962% ( 2) 00:11:23.127 9.502 - 9.551: 98.0023% ( 1) 00:11:23.127 9.551 - 9.600: 98.0144% ( 2) 00:11:23.127 9.698 - 9.748: 98.0326% ( 3) 00:11:23.127 9.748 - 9.797: 98.0386% ( 1) 00:11:23.127 9.797 - 9.846: 98.0507% ( 2) 00:11:23.127 9.846 - 9.895: 98.0568% ( 1) 00:11:23.127 9.895 - 9.945: 98.0810% ( 4) 00:11:23.127 9.945 - 9.994: 98.0871% ( 1) 00:11:23.127 9.994 - 10.043: 98.0931% ( 1) 00:11:23.127 10.043 - 10.092: 98.1052% ( 2) 00:11:23.127 10.142 - 10.191: 98.1173% ( 2) 00:11:23.127 10.191 - 10.240: 98.1355% ( 3) 00:11:23.127 10.240 - 10.289: 98.1415% ( 1) 00:11:23.127 10.338 - 10.388: 98.1718% ( 5) 00:11:23.127 10.388 - 10.437: 98.1779% ( 1) 00:11:23.127 10.437 - 10.486: 98.1900% ( 2) 00:11:23.127 10.486 - 10.535: 98.2021% ( 2) 00:11:23.127 10.535 - 10.585: 98.2142% ( 2) 00:11:23.127 10.585 - 10.634: 98.2384% ( 4) 00:11:23.127 10.634 - 10.683: 98.2505% ( 2) 00:11:23.127 10.683 - 10.732: 98.2626% ( 2) 00:11:23.127 10.732 - 10.782: 98.2808% ( 3) 00:11:23.127 10.782 - 10.831: 98.2929% ( 2) 00:11:23.127 10.880 - 10.929: 98.3050% ( 2) 00:11:23.127 10.929 - 10.978: 98.3110% ( 1) 00:11:23.127 11.077 - 11.126: 98.3171% ( 1) 00:11:23.127 11.126 - 11.175: 98.3413% ( 4) 00:11:23.127 11.175 - 11.225: 98.3534% ( 2) 00:11:23.127 11.274 - 11.323: 98.3595% ( 1) 00:11:23.128 11.372 - 11.422: 98.3655% ( 1) 00:11:23.128 11.471 - 11.520: 98.3716% ( 1) 00:11:23.128 11.569 - 11.618: 98.3776% ( 1) 00:11:23.128 11.668 - 11.717: 98.3837% ( 1) 00:11:23.128 11.914 - 11.963: 98.3897% ( 1) 00:11:23.128 12.160 - 12.209: 98.3958% ( 1) 00:11:23.128 12.209 - 12.258: 98.4018% ( 1) 00:11:23.128 12.258 - 12.308: 98.4139% ( 2) 00:11:23.128 12.603 - 12.702: 98.4261% ( 2) 00:11:23.128 12.702 - 12.800: 98.4745% ( 8) 00:11:23.128 12.800 - 12.898: 98.5290% ( 9) 00:11:23.128 12.898 - 12.997: 98.5895% ( 10) 00:11:23.128 12.997 - 13.095: 98.6803% ( 15) 00:11:23.128 13.095 - 13.194: 98.7469% ( 11) 00:11:23.128 13.194 - 13.292: 98.8316% ( 14) 00:11:23.128 13.292 - 13.391: 98.9043% ( 12) 00:11:23.128 13.391 - 13.489: 98.9951% ( 15) 00:11:23.128 13.489 - 13.588: 99.1283% ( 22) 00:11:23.128 13.588 - 13.686: 99.2312% ( 17) 00:11:23.128 13.686 - 13.785: 99.3038% ( 12) 00:11:23.128 13.785 - 13.883: 99.3644% ( 
10) 00:11:23.128 13.883 - 13.982: 99.4249% ( 10) 00:11:23.128 13.982 - 14.080: 99.4794% ( 9) 00:11:23.128 14.080 - 14.178: 99.5339% ( 9) 00:11:23.128 14.178 - 14.277: 99.5762% ( 7) 00:11:23.128 14.277 - 14.375: 99.6126% ( 6) 00:11:23.128 14.375 - 14.474: 99.6307% ( 3) 00:11:23.128 14.474 - 14.572: 99.6489% ( 3) 00:11:23.128 14.572 - 14.671: 99.6671% ( 3) 00:11:23.128 14.671 - 14.769: 99.6792% ( 2) 00:11:23.128 14.769 - 14.868: 99.6913% ( 2) 00:11:23.128 14.868 - 14.966: 99.6973% ( 1) 00:11:23.128 14.966 - 15.065: 99.7094% ( 2) 00:11:23.128 15.065 - 15.163: 99.7155% ( 1) 00:11:23.128 15.458 - 15.557: 99.7276% ( 2) 00:11:23.128 15.557 - 15.655: 99.7397% ( 2) 00:11:23.128 15.754 - 15.852: 99.7457% ( 1) 00:11:23.128 15.852 - 15.951: 99.7518% ( 1) 00:11:23.128 16.345 - 16.443: 99.7579% ( 1) 00:11:23.128 16.443 - 16.542: 99.7700% ( 2) 00:11:23.128 16.542 - 16.640: 99.7760% ( 1) 00:11:23.128 16.640 - 16.738: 99.7821% ( 1) 00:11:23.128 16.738 - 16.837: 99.7942% ( 2) 00:11:23.128 17.034 - 17.132: 99.8002% ( 1) 00:11:23.128 17.132 - 17.231: 99.8063% ( 1) 00:11:23.128 17.231 - 17.329: 99.8184% ( 2) 00:11:23.128 17.428 - 17.526: 99.8305% ( 2) 00:11:23.128 17.723 - 17.822: 99.8426% ( 2) 00:11:23.128 18.215 - 18.314: 99.8487% ( 1) 00:11:23.128 18.412 - 18.511: 99.8608% ( 2) 00:11:23.128 18.708 - 18.806: 99.8668% ( 1) 00:11:23.128 18.806 - 18.905: 99.8729% ( 1) 00:11:23.128 18.905 - 19.003: 99.8850% ( 2) 00:11:23.128 19.003 - 19.102: 99.8910% ( 1) 00:11:23.128 19.298 - 19.397: 99.8971% ( 1) 00:11:23.128 19.594 - 19.692: 99.9031% ( 1) 00:11:23.128 19.889 - 19.988: 99.9092% ( 1) 00:11:23.128 20.185 - 20.283: 99.9152% ( 1) 00:11:23.128 20.382 - 20.480: 99.9213% ( 1) 00:11:23.128 21.366 - 21.465: 99.9274% ( 1) 00:11:23.128 21.662 - 21.760: 99.9334% ( 1) 00:11:23.128 21.858 - 21.957: 99.9455% ( 2) 00:11:23.128 22.055 - 22.154: 99.9516% ( 1) 00:11:23.128 22.351 - 22.449: 99.9576% ( 1) 00:11:23.128 22.745 - 22.843: 99.9637% ( 1) 00:11:23.128 23.631 - 23.729: 99.9697% ( 1) 00:11:23.128 24.911 - 25.009: 99.9758% ( 1) 00:11:23.128 26.585 - 26.782: 99.9818% ( 1) 00:11:23.128 42.535 - 42.732: 99.9879% ( 1) 00:11:23.128 63.015 - 63.409: 99.9939% ( 1) 00:11:23.128 65.378 - 65.772: 100.0000% ( 1) 00:11:23.128 00:11:23.128 ************************************ 00:11:23.128 END TEST nvme_overhead 00:11:23.128 ************************************ 00:11:23.128 00:11:23.128 real 0m1.234s 00:11:23.128 user 0m1.079s 00:11:23.128 sys 0m0.104s 00:11:23.128 19:29:41 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.128 19:29:41 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:11:23.128 19:29:41 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:23.128 19:29:41 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:11:23.128 19:29:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.128 19:29:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:23.128 ************************************ 00:11:23.128 START TEST nvme_arbitration 00:11:23.128 ************************************ 00:11:23.128 19:29:41 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:11:26.401 Initializing NVMe Controllers 00:11:26.401 Attached to 0000:00:10.0 00:11:26.401 Attached to 0000:00:11.0 00:11:26.401 Attached to 0000:00:13.0 00:11:26.401 Attached to 0000:00:12.0 00:11:26.401 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 
00:11:26.401 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:11:26.401 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:11:26.401 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:11:26.401 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:11:26.401 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:11:26.401 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:11:26.401 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:11:26.401 Initialization complete. Launching workers.
00:11:26.401 Starting thread on core 1 with urgent priority queue
00:11:26.401 Starting thread on core 2 with urgent priority queue
00:11:26.401 Starting thread on core 3 with urgent priority queue
00:11:26.401 Starting thread on core 0 with urgent priority queue
00:11:26.401 QEMU NVMe Ctrl (12340 ) core 0: 960.00 IO/s 104.17 secs/100000 ios
00:11:26.401 QEMU NVMe Ctrl (12342 ) core 0: 960.00 IO/s 104.17 secs/100000 ios
00:11:26.401 QEMU NVMe Ctrl (12341 ) core 1: 896.00 IO/s 111.61 secs/100000 ios
00:11:26.401 QEMU NVMe Ctrl (12342 ) core 1: 896.00 IO/s 111.61 secs/100000 ios
00:11:26.401 QEMU NVMe Ctrl (12343 ) core 2: 960.00 IO/s 104.17 secs/100000 ios
00:11:26.401 QEMU NVMe Ctrl (12342 ) core 3: 832.00 IO/s 120.19 secs/100000 ios
00:11:26.401 ========================================================
00:11:26.401
00:11:26.401
00:11:26.401 real 0m3.314s
00:11:26.401 user 0m9.233s
00:11:26.401 sys 0m0.117s
00:11:26.401 19:29:45 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:26.401 ************************************
00:11:26.401 END TEST nvme_arbitration
00:11:26.401 ************************************
00:11:26.401 19:29:45 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:11:26.401 19:29:45 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:11:26.401 19:29:45 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:11:26.401 19:29:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:26.401 19:29:45 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:26.401 ************************************
00:11:26.401 START TEST nvme_single_aen
00:11:26.401 ************************************
00:11:26.401 19:29:45 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:11:26.660 Asynchronous Event Request test
00:11:26.660 Attached to 0000:00:10.0
00:11:26.660 Attached to 0000:00:11.0
00:11:26.660 Attached to 0000:00:13.0
00:11:26.660 Attached to 0000:00:12.0
00:11:26.660 Reset controller to setup AER completions for this process
00:11:26.660 Registering asynchronous event callbacks...
00:11:26.660 Getting orig temperature thresholds of all controllers
00:11:26.660 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:26.660 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:26.660 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:26.660 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:11:26.660 Setting all controllers temperature threshold low to trigger AER
00:11:26.660 Waiting for all controllers temperature threshold to be set lower
00:11:26.660 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:26.660 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:11:26.660 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:26.660 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0
00:11:26.660 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:26.660 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0
00:11:26.660 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:11:26.660 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0
00:11:26.660 Waiting for all controllers to trigger AER and reset threshold
00:11:26.660 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius)
00:11:26.660 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius)
00:11:26.660 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius)
00:11:26.660 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius)
00:11:26.660 Cleaning up...
00:11:26.660 ************************************
00:11:26.660 END TEST nvme_single_aen
00:11:26.660 ************************************
00:11:26.660
00:11:26.660 real 0m0.214s
00:11:26.660 user 0m0.078s
00:11:26.660 sys 0m0.096s
00:11:26.660 19:29:45 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:26.660 19:29:45 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:11:26.660 19:29:45 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:11:26.660 19:29:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:26.660 19:29:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:26.660 19:29:45 nvme -- common/autotest_common.sh@10 -- # set +x
00:11:26.660 ************************************
00:11:26.660 START TEST nvme_doorbell_aers
00:11:26.660 ************************************
00:11:26.660 19:29:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
00:11:26.660 19:29:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:11:26.660 19:29:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:11:26.660 19:29:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:11:26.660 19:29:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:11:26.660 19:29:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
00:11:26.660 19:29:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
00:11:26.660 19:29:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:11:26.660 19:29:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:11:26.661 19:29:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
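The trace above shows how get_nvme_bdfs builds the device list for this test: gen_nvme.sh emits a JSON config and jq extracts each controller's PCI address (traddr). The same pattern, restated as a standalone Bash snippet under the repo layout used in this run:

    # Collect NVMe PCI addresses (bdfs) the same way the traced helper does:
    # gen_nvme.sh prints a JSON config; jq pulls out each traddr field.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    printf '%s\n' "${bdfs[@]}"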
00:11:26.918 19:29:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:26.918 19:29:45 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:26.918 19:29:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:26.918 19:29:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:26.918 [2024-12-05 19:29:45.891717] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63448) is not found. Dropping the request. 00:11:36.953 Executing: test_write_invalid_db 00:11:36.953 Waiting for AER completion... 00:11:36.953 Failure: test_write_invalid_db 00:11:36.953 00:11:36.953 Executing: test_invalid_db_write_overflow_sq 00:11:36.953 Waiting for AER completion... 00:11:36.953 Failure: test_invalid_db_write_overflow_sq 00:11:36.953 00:11:36.953 Executing: test_invalid_db_write_overflow_cq 00:11:36.953 Waiting for AER completion... 00:11:36.953 Failure: test_invalid_db_write_overflow_cq 00:11:36.953 00:11:36.953 19:29:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:36.953 19:29:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:36.953 [2024-12-05 19:29:55.919890] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63448) is not found. Dropping the request. 00:11:46.941 Executing: test_write_invalid_db 00:11:46.941 Waiting for AER completion... 00:11:46.941 Failure: test_write_invalid_db 00:11:46.941 00:11:46.941 Executing: test_invalid_db_write_overflow_sq 00:11:46.941 Waiting for AER completion... 00:11:46.941 Failure: test_invalid_db_write_overflow_sq 00:11:46.941 00:11:46.941 Executing: test_invalid_db_write_overflow_cq 00:11:46.941 Waiting for AER completion... 00:11:46.941 Failure: test_invalid_db_write_overflow_cq 00:11:46.941 00:11:46.941 19:30:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:46.941 19:30:05 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:47.199 [2024-12-05 19:30:05.949531] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63448) is not found. Dropping the request. 00:11:57.160 Executing: test_write_invalid_db 00:11:57.161 Waiting for AER completion... 00:11:57.161 Failure: test_write_invalid_db 00:11:57.161 00:11:57.161 Executing: test_invalid_db_write_overflow_sq 00:11:57.161 Waiting for AER completion... 00:11:57.161 Failure: test_invalid_db_write_overflow_sq 00:11:57.161 00:11:57.161 Executing: test_invalid_db_write_overflow_cq 00:11:57.161 Waiting for AER completion... 
00:11:57.161 Failure: test_invalid_db_write_overflow_cq 00:11:57.161 00:11:57.161 19:30:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:11:57.161 19:30:15 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:57.161 [2024-12-05 19:30:15.993774] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63448) is not found. Dropping the request. 00:12:07.122 Executing: test_write_invalid_db 00:12:07.122 Waiting for AER completion... 00:12:07.122 Failure: test_write_invalid_db 00:12:07.122 00:12:07.122 Executing: test_invalid_db_write_overflow_sq 00:12:07.122 Waiting for AER completion... 00:12:07.122 Failure: test_invalid_db_write_overflow_sq 00:12:07.122 00:12:07.122 Executing: test_invalid_db_write_overflow_cq 00:12:07.122 Waiting for AER completion... 00:12:07.122 Failure: test_invalid_db_write_overflow_cq 00:12:07.122 00:12:07.122 00:12:07.122 real 0m40.192s 00:12:07.122 user 0m34.181s 00:12:07.122 sys 0m5.650s 00:12:07.122 ************************************ 00:12:07.122 END TEST nvme_doorbell_aers 00:12:07.122 ************************************ 00:12:07.122 19:30:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.122 19:30:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:12:07.122 19:30:25 nvme -- nvme/nvme.sh@97 -- # uname 00:12:07.122 19:30:25 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:12:07.122 19:30:25 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:07.122 19:30:25 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:12:07.122 19:30:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.122 19:30:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:07.122 ************************************ 00:12:07.122 START TEST nvme_multi_aen 00:12:07.122 ************************************ 00:12:07.122 19:30:25 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:12:07.122 [2024-12-05 19:30:26.043145] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63448) is not found. Dropping the request. 00:12:07.122 [2024-12-05 19:30:26.043210] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63448) is not found. Dropping the request. 00:12:07.122 [2024-12-05 19:30:26.043222] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63448) is not found. Dropping the request. 00:12:07.122 [2024-12-05 19:30:26.044461] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63448) is not found. Dropping the request. 00:12:07.122 [2024-12-05 19:30:26.044493] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63448) is not found. Dropping the request. 00:12:07.122 [2024-12-05 19:30:26.044503] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63448) is not found. Dropping the request. 00:12:07.122 [2024-12-05 19:30:26.045466] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63448) is not found. 
Dropping the request. 00:12:07.122 [2024-12-05 19:30:26.045494] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63448) is not found. Dropping the request. 00:12:07.122 [2024-12-05 19:30:26.045503] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63448) is not found. Dropping the request. 00:12:07.122 [2024-12-05 19:30:26.046690] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63448) is not found. Dropping the request. 00:12:07.122 [2024-12-05 19:30:26.046819] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63448) is not found. Dropping the request. 00:12:07.122 [2024-12-05 19:30:26.046888] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63448) is not found. Dropping the request. 00:12:07.122 Child process pid: 63974 00:12:07.381 [Child] Asynchronous Event Request test 00:12:07.381 [Child] Attached to 0000:00:10.0 00:12:07.381 [Child] Attached to 0000:00:11.0 00:12:07.381 [Child] Attached to 0000:00:13.0 00:12:07.381 [Child] Attached to 0000:00:12.0 00:12:07.381 [Child] Registering asynchronous event callbacks... 00:12:07.381 [Child] Getting orig temperature thresholds of all controllers 00:12:07.381 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:07.381 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:07.381 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:07.381 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:07.381 [Child] Waiting for all controllers to trigger AER and reset threshold 00:12:07.381 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:07.381 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:07.381 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:07.381 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:07.381 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:07.381 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:07.381 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:07.381 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:07.381 [Child] Cleaning up... 00:12:07.381 Asynchronous Event Request test 00:12:07.381 Attached to 0000:00:10.0 00:12:07.381 Attached to 0000:00:11.0 00:12:07.381 Attached to 0000:00:13.0 00:12:07.381 Attached to 0000:00:12.0 00:12:07.381 Reset controller to setup AER completions for this process 00:12:07.381 Registering asynchronous event callbacks... 
00:12:07.381 Getting orig temperature thresholds of all controllers 00:12:07.381 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:07.381 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:07.381 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:07.381 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:12:07.381 Setting all controllers temperature threshold low to trigger AER 00:12:07.381 Waiting for all controllers temperature threshold to be set lower 00:12:07.381 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:07.381 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:12:07.381 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:07.381 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:12:07.381 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:07.381 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:12:07.381 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:12:07.381 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:12:07.381 Waiting for all controllers to trigger AER and reset threshold 00:12:07.381 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:07.381 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:07.381 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:07.381 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:12:07.381 Cleaning up... 00:12:07.381 00:12:07.381 real 0m0.447s 00:12:07.381 user 0m0.147s 00:12:07.381 sys 0m0.200s 00:12:07.381 19:30:26 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.381 19:30:26 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:12:07.381 ************************************ 00:12:07.381 END TEST nvme_multi_aen 00:12:07.381 ************************************ 00:12:07.381 19:30:26 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:07.381 19:30:26 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:07.381 19:30:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.381 19:30:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:07.381 ************************************ 00:12:07.381 START TEST nvme_startup 00:12:07.381 ************************************ 00:12:07.381 19:30:26 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:12:07.639 Initializing NVMe Controllers 00:12:07.639 Attached to 0000:00:10.0 00:12:07.639 Attached to 0000:00:11.0 00:12:07.639 Attached to 0000:00:13.0 00:12:07.639 Attached to 0000:00:12.0 00:12:07.639 Initialization complete. 00:12:07.639 Time used:154611.125 (us). 
00:12:07.639 00:12:07.639 real 0m0.214s 00:12:07.639 user 0m0.076s 00:12:07.639 sys 0m0.092s 00:12:07.639 19:30:26 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.639 19:30:26 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:12:07.639 ************************************ 00:12:07.639 END TEST nvme_startup 00:12:07.639 ************************************ 00:12:07.639 19:30:26 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:12:07.639 19:30:26 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:07.639 19:30:26 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.639 19:30:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:07.639 ************************************ 00:12:07.639 START TEST nvme_multi_secondary 00:12:07.639 ************************************ 00:12:07.639 19:30:26 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:12:07.639 19:30:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=64024 00:12:07.639 19:30:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:12:07.639 19:30:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=64025 00:12:07.639 19:30:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:12:07.639 19:30:26 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:10.917 Initializing NVMe Controllers 00:12:10.917 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:10.917 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:10.917 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:10.917 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:10.917 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:10.917 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:10.917 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:10.917 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:10.917 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:10.917 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:10.917 Initialization complete. Launching workers. 
00:12:10.917 ======================================================== 00:12:10.917 Latency(us) 00:12:10.917 Device Information : IOPS MiB/s Average min max 00:12:10.917 PCIE (0000:00:10.0) NSID 1 from core 2: 3267.05 12.76 4895.36 1260.82 12735.96 00:12:10.917 PCIE (0000:00:11.0) NSID 1 from core 2: 3267.05 12.76 4897.13 1239.86 12413.03 00:12:10.917 PCIE (0000:00:13.0) NSID 1 from core 2: 3267.05 12.76 4890.52 1235.23 12411.46 00:12:10.917 PCIE (0000:00:12.0) NSID 1 from core 2: 3267.05 12.76 4890.49 1215.14 13376.52 00:12:10.917 PCIE (0000:00:12.0) NSID 2 from core 2: 3267.05 12.76 4890.46 1209.66 13257.04 00:12:10.917 PCIE (0000:00:12.0) NSID 3 from core 2: 3267.05 12.76 4890.45 1109.53 12798.54 00:12:10.917 ======================================================== 00:12:10.917 Total : 19602.32 76.57 4892.40 1109.53 13376.52 00:12:10.917 00:12:11.175 19:30:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 64024 00:12:11.175 Initializing NVMe Controllers 00:12:11.175 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:11.175 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:11.175 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:11.175 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:11.175 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:11.175 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:11.175 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:11.175 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:11.175 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:11.175 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:11.175 Initialization complete. Launching workers. 00:12:11.175 ======================================================== 00:12:11.175 Latency(us) 00:12:11.175 Device Information : IOPS MiB/s Average min max 00:12:11.175 PCIE (0000:00:10.0) NSID 1 from core 1: 7745.63 30.26 2064.35 901.95 5506.70 00:12:11.175 PCIE (0000:00:11.0) NSID 1 from core 1: 7745.63 30.26 2065.43 895.13 5535.96 00:12:11.175 PCIE (0000:00:13.0) NSID 1 from core 1: 7745.63 30.26 2065.55 975.88 5490.27 00:12:11.175 PCIE (0000:00:12.0) NSID 1 from core 1: 7745.63 30.26 2065.59 864.33 5616.95 00:12:11.175 PCIE (0000:00:12.0) NSID 2 from core 1: 7745.63 30.26 2065.65 916.37 5841.86 00:12:11.175 PCIE (0000:00:12.0) NSID 3 from core 1: 7745.63 30.26 2065.76 1001.38 5403.72 00:12:11.175 ======================================================== 00:12:11.175 Total : 46473.80 181.54 2065.39 864.33 5841.86 00:12:11.175 00:12:13.073 Initializing NVMe Controllers 00:12:13.073 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:13.073 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:13.073 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:13.073 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:13.073 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:13.073 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:13.073 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:13.073 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:13.073 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:13.073 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:13.073 Initialization complete. Launching workers. 
00:12:13.073 ======================================================== 00:12:13.073 Latency(us) 00:12:13.073 Device Information : IOPS MiB/s Average min max 00:12:13.073 PCIE (0000:00:10.0) NSID 1 from core 0: 10843.64 42.36 1474.25 667.78 7018.06 00:12:13.073 PCIE (0000:00:11.0) NSID 1 from core 0: 10843.64 42.36 1475.11 690.79 7183.72 00:12:13.073 PCIE (0000:00:13.0) NSID 1 from core 0: 10843.64 42.36 1475.08 675.82 6460.37 00:12:13.073 PCIE (0000:00:12.0) NSID 1 from core 0: 10843.64 42.36 1475.06 638.49 6201.52 00:12:13.073 PCIE (0000:00:12.0) NSID 2 from core 0: 10843.64 42.36 1475.04 618.07 6626.23 00:12:13.073 PCIE (0000:00:12.0) NSID 3 from core 0: 10843.64 42.36 1475.02 593.92 6815.31 00:12:13.073 ======================================================== 00:12:13.073 Total : 65061.86 254.15 1474.93 593.92 7183.72 00:12:13.073 00:12:13.073 19:30:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 64025 00:12:13.073 19:30:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=64089 00:12:13.073 19:30:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:12:13.073 19:30:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=64090 00:12:13.073 19:30:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:12:13.073 19:30:31 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:12:16.351 Initializing NVMe Controllers 00:12:16.351 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:16.351 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:16.351 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:16.351 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:16.351 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:12:16.351 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:12:16.351 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:12:16.351 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:12:16.351 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:12:16.351 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:12:16.351 Initialization complete. Launching workers. 
00:12:16.351 ======================================================== 00:12:16.351 Latency(us) 00:12:16.351 Device Information : IOPS MiB/s Average min max 00:12:16.351 PCIE (0000:00:10.0) NSID 1 from core 0: 7765.30 30.33 2059.08 698.52 6884.25 00:12:16.351 PCIE (0000:00:11.0) NSID 1 from core 0: 7765.30 30.33 2060.14 721.27 6708.56 00:12:16.351 PCIE (0000:00:13.0) NSID 1 from core 0: 7765.30 30.33 2060.19 731.81 6866.66 00:12:16.351 PCIE (0000:00:12.0) NSID 1 from core 0: 7765.30 30.33 2060.21 733.13 6436.33 00:12:16.351 PCIE (0000:00:12.0) NSID 2 from core 0: 7765.30 30.33 2060.18 713.22 6075.74 00:12:16.351 PCIE (0000:00:12.0) NSID 3 from core 0: 7765.30 30.33 2060.16 714.54 6373.31 00:12:16.351 ======================================================== 00:12:16.351 Total : 46591.81 182.00 2059.99 698.52 6884.25 00:12:16.351 00:12:16.351 Initializing NVMe Controllers 00:12:16.351 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:16.351 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:16.351 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:16.351 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:16.351 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:12:16.351 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:12:16.351 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:12:16.351 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:12:16.351 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:12:16.351 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:12:16.351 Initialization complete. Launching workers. 00:12:16.351 ======================================================== 00:12:16.351 Latency(us) 00:12:16.351 Device Information : IOPS MiB/s Average min max 00:12:16.351 PCIE (0000:00:10.0) NSID 1 from core 1: 7629.41 29.80 2095.73 693.12 7255.56 00:12:16.351 PCIE (0000:00:11.0) NSID 1 from core 1: 7629.41 29.80 2096.73 710.54 6794.36 00:12:16.351 PCIE (0000:00:13.0) NSID 1 from core 1: 7629.41 29.80 2096.68 704.89 7153.69 00:12:16.351 PCIE (0000:00:12.0) NSID 1 from core 1: 7629.41 29.80 2096.65 708.62 7616.19 00:12:16.351 PCIE (0000:00:12.0) NSID 2 from core 1: 7629.41 29.80 2096.68 709.01 6860.27 00:12:16.351 PCIE (0000:00:12.0) NSID 3 from core 1: 7629.41 29.80 2096.63 721.14 6777.99 00:12:16.351 ======================================================== 00:12:16.351 Total : 45776.47 178.81 2096.52 693.12 7616.19 00:12:16.351 00:12:18.325 Initializing NVMe Controllers 00:12:18.325 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:12:18.325 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:12:18.325 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:12:18.325 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:12:18.325 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:12:18.325 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:12:18.325 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:12:18.325 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:12:18.325 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:12:18.325 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:12:18.325 Initialization complete. Launching workers. 
00:12:18.325 ======================================================== 00:12:18.325 Latency(us) 00:12:18.325 Device Information : IOPS MiB/s Average min max 00:12:18.325 PCIE (0000:00:10.0) NSID 1 from core 2: 4669.99 18.24 3423.91 695.47 16519.60 00:12:18.325 PCIE (0000:00:11.0) NSID 1 from core 2: 4669.99 18.24 3425.73 707.63 12548.61 00:12:18.325 PCIE (0000:00:13.0) NSID 1 from core 2: 4669.99 18.24 3425.50 732.51 13552.38 00:12:18.325 PCIE (0000:00:12.0) NSID 1 from core 2: 4669.99 18.24 3425.28 719.02 13459.79 00:12:18.325 PCIE (0000:00:12.0) NSID 2 from core 2: 4669.99 18.24 3425.40 676.86 13485.00 00:12:18.325 PCIE (0000:00:12.0) NSID 3 from core 2: 4669.99 18.24 3425.19 625.73 13360.10 00:12:18.325 ======================================================== 00:12:18.325 Total : 28019.96 109.45 3425.17 625.73 16519.60 00:12:18.325 00:12:18.325 ************************************ 00:12:18.325 END TEST nvme_multi_secondary 00:12:18.325 ************************************ 00:12:18.325 19:30:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 64089 00:12:18.325 19:30:37 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 64090 00:12:18.325 00:12:18.325 real 0m10.535s 00:12:18.325 user 0m18.429s 00:12:18.325 sys 0m0.647s 00:12:18.325 19:30:37 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.325 19:30:37 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:12:18.325 19:30:37 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:12:18.325 19:30:37 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:12:18.325 19:30:37 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/63057 ]] 00:12:18.325 19:30:37 nvme -- common/autotest_common.sh@1094 -- # kill 63057 00:12:18.325 19:30:37 nvme -- common/autotest_common.sh@1095 -- # wait 63057 00:12:18.325 [2024-12-05 19:30:37.162291] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63973) is not found. Dropping the request. 00:12:18.325 [2024-12-05 19:30:37.162357] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63973) is not found. Dropping the request. 00:12:18.325 [2024-12-05 19:30:37.162383] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63973) is not found. Dropping the request. 00:12:18.325 [2024-12-05 19:30:37.162398] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63973) is not found. Dropping the request. 00:12:18.325 [2024-12-05 19:30:37.164461] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63973) is not found. Dropping the request. 00:12:18.325 [2024-12-05 19:30:37.164509] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63973) is not found. Dropping the request. 00:12:18.325 [2024-12-05 19:30:37.164526] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63973) is not found. Dropping the request. 00:12:18.325 [2024-12-05 19:30:37.164543] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63973) is not found. Dropping the request. 00:12:18.325 [2024-12-05 19:30:37.166575] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63973) is not found. Dropping the request. 
00:12:18.325 [2024-12-05 19:30:37.166623] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63973) is not found. Dropping the request. 00:12:18.325 [2024-12-05 19:30:37.166639] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63973) is not found. Dropping the request. 00:12:18.325 [2024-12-05 19:30:37.166657] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63973) is not found. Dropping the request. 00:12:18.325 [2024-12-05 19:30:37.168636] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63973) is not found. Dropping the request. 00:12:18.326 [2024-12-05 19:30:37.168685] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63973) is not found. Dropping the request. 00:12:18.326 [2024-12-05 19:30:37.168701] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63973) is not found. Dropping the request. 00:12:18.326 [2024-12-05 19:30:37.168718] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63973) is not found. Dropping the request. 00:12:18.326 [2024-12-05 19:30:37.271442] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:12:18.326 19:30:37 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:12:18.326 19:30:37 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:12:18.326 19:30:37 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:18.326 19:30:37 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:18.326 19:30:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.326 19:30:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:18.326 ************************************ 00:12:18.326 START TEST bdev_nvme_reset_stuck_adm_cmd 00:12:18.326 ************************************ 00:12:18.326 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:12:18.584 * Looking for test storage... 
00:12:18.584 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:18.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.584 --rc genhtml_branch_coverage=1 00:12:18.584 --rc genhtml_function_coverage=1 00:12:18.584 --rc genhtml_legend=1 00:12:18.584 --rc geninfo_all_blocks=1 00:12:18.584 --rc geninfo_unexecuted_blocks=1 00:12:18.584 00:12:18.584 ' 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:18.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.584 --rc genhtml_branch_coverage=1 00:12:18.584 --rc genhtml_function_coverage=1 00:12:18.584 --rc genhtml_legend=1 00:12:18.584 --rc geninfo_all_blocks=1 00:12:18.584 --rc geninfo_unexecuted_blocks=1 00:12:18.584 00:12:18.584 ' 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:18.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.584 --rc genhtml_branch_coverage=1 00:12:18.584 --rc genhtml_function_coverage=1 00:12:18.584 --rc genhtml_legend=1 00:12:18.584 --rc geninfo_all_blocks=1 00:12:18.584 --rc geninfo_unexecuted_blocks=1 00:12:18.584 00:12:18.584 ' 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:18.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.584 --rc genhtml_branch_coverage=1 00:12:18.584 --rc genhtml_function_coverage=1 00:12:18.584 --rc genhtml_legend=1 00:12:18.584 --rc geninfo_all_blocks=1 00:12:18.584 --rc geninfo_unexecuted_blocks=1 00:12:18.584 00:12:18.584 ' 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:12:18.584 
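The RPC trace that follows constructs a deliberately stuck admin command and checks that a controller reset clears it. A hedged sketch of that flow, reconstructed only from the rpc.py calls visible in this log (backgrounding the send is inferred from the get_feat_pid/wait pair; cmd_b64 is a placeholder for the base64 payload printed in the trace):

    #!/usr/bin/env bash
    # Reconstruction of the nvme_reset_stuck_adm_cmd flow from the traced
    # RPCs in this run; assumes spdk_tgt is already up (started here with
    # -m 0xF as pid 64257). Not a drop-in replacement for the test script.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
    # Arm a one-shot injection: hold the next admin opc 10 (Get Features)
    # for up to 15 s, then fail it with sct 0 / sc 1.
    "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    # The Get Features command now gets stuck on the injection, so send it
    # in the background and note its pid.
    cmd_b64='<base64-encoded admin command, as printed in the trace>'
    "$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd_b64" &
    get_feat_pid=$!
    sleep 2
    # The reset has to complete the stuck command for the test to pass.
    "$rpc" bdev_nvme_reset_controller nvme0
    wait "$get_feat_pid"
    "$rpc" bdev_nvme_detach_controller nvme0

The trace then base64-decodes the captured completion to confirm that the injected status (sct 0x0, sc 0x1) actually came back before tearing down the target.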
19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:12:18.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.584 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:12:18.585 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:12:18.585 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64257 00:12:18.585 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:18.585 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:12:18.585 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64257 00:12:18.585 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 64257 ']' 00:12:18.585 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.585 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.585 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:18.585 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.585 19:30:37 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:18.585 [2024-12-05 19:30:37.572429] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:12:18.585 [2024-12-05 19:30:37.572681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64257 ] 00:12:18.842 [2024-12-05 19:30:37.734170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:18.842 [2024-12-05 19:30:37.835655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:18.842 [2024-12-05 19:30:37.835919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:18.842 [2024-12-05 19:30:37.836119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.842 [2024-12-05 19:30:37.836166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:19.775 19:30:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.775 19:30:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:12:19.775 19:30:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:12:19.775 19:30:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.775 19:30:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:19.775 nvme0n1 00:12:19.775 19:30:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.775 19:30:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:12:19.775 19:30:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_KBP5B.txt 00:12:19.775 19:30:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:12:19.775 19:30:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.775 19:30:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:19.775 true 00:12:19.775 19:30:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.775 19:30:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:12:19.775 19:30:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733427038 00:12:19.775 19:30:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64280 00:12:19.775 19:30:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:19.775 19:30:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:12:19.775 19:30:38 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:12:21.722 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:12:21.722 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.722 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:21.722 [2024-12-05 19:30:40.523484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:12:21.722 [2024-12-05 19:30:40.523738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:12:21.722 [2024-12-05 19:30:40.523762] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:21.722 [2024-12-05 19:30:40.523775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.722 [2024-12-05 19:30:40.525602] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:12:21.722 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64280 00:12:21.722 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.722 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64280 00:12:21.722 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64280 00:12:21.722 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:12:21.722 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:12:21.722 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:12:21.722 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.722 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:21.722 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.722 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:12:21.722 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_KBP5B.txt 00:12:21.722 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:12:21.722 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:12:21.722 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:21.722 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:21.722 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:21.722 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:21.722 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:21.722 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:21.723 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:12:21.723 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:12:21.723 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:12:21.723 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:12:21.723 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:12:21.723 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:12:21.723 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:12:21.723 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:12:21.723 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:12:21.723 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:12:21.723 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:12:21.723 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_KBP5B.txt 00:12:21.723 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64257 00:12:21.723 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 64257 ']' 00:12:21.723 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 64257 00:12:21.723 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:12:21.723 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:21.723 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64257 00:12:21.723 killing process with pid 64257 00:12:21.723 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:21.723 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:21.723 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64257' 00:12:21.723 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 64257 00:12:21.723 19:30:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 64257 00:12:23.093 19:30:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:12:23.093 19:30:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:12:23.093 00:12:23.093 real 0m4.578s 00:12:23.093 user 0m16.247s 00:12:23.093 sys 0m0.476s 00:12:23.093 19:30:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:23.093 ************************************ 00:12:23.093 END TEST bdev_nvme_reset_stuck_adm_cmd 00:12:23.093 19:30:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:12:23.093 ************************************ 00:12:23.093 19:30:41 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:12:23.093 19:30:41 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:12:23.093 19:30:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:23.093 19:30:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.093 19:30:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:23.093 ************************************ 00:12:23.093 START TEST nvme_fio 00:12:23.093 ************************************ 00:12:23.093 19:30:41 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:12:23.093 19:30:41 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:12:23.093 19:30:41 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:12:23.093 19:30:41 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:12:23.093 19:30:41 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:12:23.093 19:30:41 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:12:23.093 19:30:41 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:12:23.093 19:30:41 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:12:23.093 19:30:41 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:12:23.093 19:30:41 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:12:23.093 19:30:41 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:12:23.093 19:30:41 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:12:23.093 19:30:41 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:12:23.093 19:30:41 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:23.093 19:30:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:23.093 19:30:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:23.350 19:30:42 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:12:23.350 19:30:42 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:23.607 19:30:42 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:23.607 19:30:42 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:23.607 19:30:42 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:23.607 19:30:42 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:23.607 19:30:42 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:23.607 19:30:42 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:23.607 19:30:42 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:23.607 19:30:42 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:12:23.607 19:30:42 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:23.607 19:30:42 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:23.607 19:30:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:23.607 19:30:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:23.607 19:30:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:12:23.607 19:30:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:23.607 19:30:42 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:23.607 19:30:42 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:12:23.607 19:30:42 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:23.607 19:30:42 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:12:23.865 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:23.865 fio-3.35 00:12:23.865 Starting 1 thread 00:12:28.044 00:12:28.044 test: (groupid=0, jobs=1): err= 0: pid=64415: Thu Dec 5 19:30:46 2024 00:12:28.044 read: IOPS=18.1k, BW=70.8MiB/s (74.3MB/s)(142MiB/2008msec) 00:12:28.044 slat (nsec): min=3361, max=67626, avg=5108.12, stdev=2304.61 00:12:28.044 clat (usec): min=777, max=10396, avg=2810.64, stdev=986.72 00:12:28.044 lat (usec): min=782, max=10400, avg=2815.75, stdev=987.69 00:12:28.044 clat percentiles (usec): 00:12:28.044 | 1.00th=[ 1221], 5.00th=[ 1614], 10.00th=[ 1958], 20.00th=[ 2311], 00:12:28.044 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2540], 60.00th=[ 2606], 00:12:28.044 | 70.00th=[ 2769], 80.00th=[ 3195], 90.00th=[ 4080], 95.00th=[ 5014], 00:12:28.044 | 99.00th=[ 6325], 99.50th=[ 6849], 99.90th=[ 8029], 99.95th=[ 8717], 00:12:28.044 | 99.99th=[ 9372] 00:12:28.044 bw ( KiB/s): min=45984, max=97112, per=100.00%, avg=72734.00, stdev=21582.98, samples=4 00:12:28.044 iops : min=11496, max=24278, avg=18183.50, stdev=5395.74, samples=4 00:12:28.044 write: IOPS=18.1k, BW=70.9MiB/s (74.3MB/s)(142MiB/2008msec); 0 zone resets 00:12:28.044 slat (nsec): min=3431, max=47760, avg=5399.43, stdev=2350.81 00:12:28.044 clat (usec): min=855, max=24330, avg=4220.88, stdev=3579.43 00:12:28.044 lat (usec): min=859, max=24334, avg=4226.28, stdev=3579.66 00:12:28.044 clat percentiles (usec): 00:12:28.044 | 1.00th=[ 1450], 5.00th=[ 2008], 10.00th=[ 2245], 20.00th=[ 2409], 00:12:28.044 | 30.00th=[ 2474], 40.00th=[ 2540], 50.00th=[ 2606], 60.00th=[ 2769], 00:12:28.044 | 70.00th=[ 3392], 80.00th=[ 4883], 90.00th=[10552], 95.00th=[12911], 00:12:28.044 | 99.00th=[17171], 99.50th=[18744], 99.90th=[22414], 99.95th=[23200], 00:12:28.044 | 99.99th=[23987] 00:12:28.044 bw ( KiB/s): min=45800, max=96712, per=100.00%, avg=72676.00, stdev=21459.71, samples=4 00:12:28.044 iops : min=11450, max=24178, avg=18169.00, stdev=5364.93, samples=4 00:12:28.044 lat (usec) : 1000=0.08% 00:12:28.044 lat (msec) : 2=7.76%, 4=74.57%, 10=11.98%, 20=5.47%, 50=0.14% 00:12:28.044 cpu : usr=99.30%, sys=0.00%, ctx=13, majf=0, 
minf=609 00:12:28.044 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:28.044 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:28.044 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:28.044 issued rwts: total=36402,36430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:28.044 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:28.044 00:12:28.044 Run status group 0 (all jobs): 00:12:28.044 READ: bw=70.8MiB/s (74.3MB/s), 70.8MiB/s-70.8MiB/s (74.3MB/s-74.3MB/s), io=142MiB (149MB), run=2008-2008msec 00:12:28.044 WRITE: bw=70.9MiB/s (74.3MB/s), 70.9MiB/s-70.9MiB/s (74.3MB/s-74.3MB/s), io=142MiB (149MB), run=2008-2008msec 00:12:28.044 ----------------------------------------------------- 00:12:28.044 Suppressions used: 00:12:28.044 count bytes template 00:12:28.044 1 32 /usr/src/fio/parse.c 00:12:28.044 1 8 libtcmalloc_minimal.so 00:12:28.044 ----------------------------------------------------- 00:12:28.044 00:12:28.044 19:30:46 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:28.044 19:30:46 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:28.044 19:30:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:28.044 19:30:46 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:28.044 19:30:47 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:12:28.044 19:30:47 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:28.303 19:30:47 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:28.303 19:30:47 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:28.303 19:30:47 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:28.303 19:30:47 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:28.303 19:30:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:28.303 19:30:47 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:28.303 19:30:47 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:28.303 19:30:47 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:12:28.303 19:30:47 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:28.303 19:30:47 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:28.303 19:30:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:28.303 19:30:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:12:28.303 19:30:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:28.303 19:30:47 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:28.303 19:30:47 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:28.303 19:30:47 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:12:28.303 19:30:47 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:28.303 19:30:47 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:12:28.561 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:28.561 fio-3.35 00:12:28.561 Starting 1 thread 00:12:35.120 00:12:35.120 test: (groupid=0, jobs=1): err= 0: pid=64475: Thu Dec 5 19:30:53 2024 00:12:35.120 read: IOPS=23.7k, BW=92.7MiB/s (97.2MB/s)(185MiB/2001msec) 00:12:35.120 slat (nsec): min=4220, max=58911, avg=4906.24, stdev=1704.67 00:12:35.120 clat (usec): min=224, max=8676, avg=2695.24, stdev=603.29 00:12:35.120 lat (usec): min=228, max=8680, avg=2700.15, stdev=604.23 00:12:35.120 clat percentiles (usec): 00:12:35.120 | 1.00th=[ 1844], 5.00th=[ 2343], 10.00th=[ 2409], 20.00th=[ 2474], 00:12:35.120 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2540], 60.00th=[ 2573], 00:12:35.120 | 70.00th=[ 2638], 80.00th=[ 2704], 90.00th=[ 2999], 95.00th=[ 3916], 00:12:35.120 | 99.00th=[ 5735], 99.50th=[ 6063], 99.90th=[ 7046], 99.95th=[ 7898], 00:12:35.120 | 99.99th=[ 8356] 00:12:35.120 bw ( KiB/s): min=92216, max=95248, per=98.94%, avg=93909.33, stdev=1546.80, samples=3 00:12:35.120 iops : min=23054, max=23812, avg=23477.33, stdev=386.70, samples=3 00:12:35.120 write: IOPS=23.6k, BW=92.1MiB/s (96.6MB/s)(184MiB/2001msec); 0 zone resets 00:12:35.120 slat (nsec): min=4337, max=52856, avg=5201.50, stdev=1758.33 00:12:35.120 clat (usec): min=200, max=8404, avg=2696.79, stdev=605.36 00:12:35.120 lat (usec): min=205, max=8409, avg=2701.99, stdev=606.32 00:12:35.120 clat percentiles (usec): 00:12:35.120 | 1.00th=[ 1795], 5.00th=[ 2343], 10.00th=[ 2409], 20.00th=[ 2474], 00:12:35.120 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2540], 60.00th=[ 2573], 00:12:35.120 | 70.00th=[ 2638], 80.00th=[ 2737], 90.00th=[ 2999], 95.00th=[ 3916], 00:12:35.120 | 99.00th=[ 5735], 99.50th=[ 6063], 99.90th=[ 6980], 99.95th=[ 7701], 00:12:35.120 | 99.99th=[ 8029] 00:12:35.120 bw ( KiB/s): min=92072, max=95896, per=99.64%, avg=93992.00, stdev=1912.05, samples=3 00:12:35.120 iops : min=23018, max=23974, avg=23498.00, stdev=478.01, samples=3 00:12:35.120 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:12:35.120 lat (msec) : 2=1.80%, 4=93.52%, 10=4.62% 00:12:35.120 cpu : usr=99.35%, sys=0.00%, ctx=3, majf=0, minf=609 00:12:35.120 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:35.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:35.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:35.120 issued rwts: total=47481,47190,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:35.120 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:35.120 00:12:35.120 Run status group 0 (all jobs): 00:12:35.120 READ: bw=92.7MiB/s (97.2MB/s), 92.7MiB/s-92.7MiB/s (97.2MB/s-97.2MB/s), io=185MiB (194MB), run=2001-2001msec 00:12:35.120 WRITE: bw=92.1MiB/s (96.6MB/s), 92.1MiB/s-92.1MiB/s (96.6MB/s-96.6MB/s), io=184MiB (193MB), run=2001-2001msec 00:12:35.120 ----------------------------------------------------- 00:12:35.120 Suppressions used: 00:12:35.120 count bytes template 00:12:35.120 1 32 /usr/src/fio/parse.c 00:12:35.120 1 8 libtcmalloc_minimal.so 00:12:35.120 ----------------------------------------------------- 00:12:35.120 00:12:35.120 
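[Editor's note on the repeated ldd/grep/awk xtrace around each fio run above: fio_plugin() in common/autotest_common.sh probes the SPDK fio plugin for a linked-in sanitizer runtime and, when it finds one, preloads it ahead of the plugin so ASAN is initialized before fio dlopen()s spdk_nvme. A condensed sketch of that helper, reconstructed from the trace; the real function is longer and handles more cases:

fio_plugin() {
    local plugin=$1; shift
    local fio_dir=/usr/src/fio
    local sanitizers=('libasan' 'libclang_rt.asan') sanitizer asan_lib=
    for sanitizer in "${sanitizers[@]}"; do
        # Ask the dynamic linker whether the plugin links a sanitizer runtime;
        # column 3 of ldd output is the resolved path (/usr/lib64/libasan.so.8 in these runs).
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break    # first hit wins
    done
    # Preload the sanitizer (if any) ahead of the plugin itself, then run fio.
    LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$@"
}

The traddr in the --filename argument is dot-separated (0000.00.10.0 rather than 0000:00:10.0) because fio reserves ':' as a filename separator; the spdk ioengine decodes the dots back into a PCI address.]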
19:30:53 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:35.120 19:30:53 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:35.120 19:30:53 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:35.120 19:30:53 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:35.120 19:30:54 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:12:35.120 19:30:54 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:35.377 19:30:54 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:35.377 19:30:54 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:35.377 19:30:54 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:35.377 19:30:54 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:35.377 19:30:54 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:35.377 19:30:54 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:35.377 19:30:54 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:35.377 19:30:54 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:12:35.377 19:30:54 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:35.377 19:30:54 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:35.377 19:30:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:35.377 19:30:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:35.377 19:30:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:12:35.377 19:30:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:35.377 19:30:54 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:35.377 19:30:54 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:12:35.377 19:30:54 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:35.377 19:30:54 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:12:35.635 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:35.635 fio-3.35 00:12:35.635 Starting 1 thread 00:12:42.195 00:12:42.196 test: (groupid=0, jobs=1): err= 0: pid=64532: Thu Dec 5 19:31:01 2024 00:12:42.196 read: IOPS=23.1k, BW=90.1MiB/s (94.5MB/s)(180MiB/2001msec) 00:12:42.196 slat (nsec): min=3373, max=73371, avg=5089.67, stdev=2517.16 00:12:42.196 clat (usec): min=206, max=8866, avg=2775.52, stdev=949.07 00:12:42.196 lat (usec): min=210, max=8879, avg=2780.61, stdev=950.70 00:12:42.196 clat percentiles (usec): 00:12:42.196 | 1.00th=[ 1385], 5.00th=[ 1958], 10.00th=[ 2212], 20.00th=[ 2376], 00:12:42.196 | 30.00th=[ 2442], 
40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2573], 00:12:42.196 | 70.00th=[ 2638], 80.00th=[ 2769], 90.00th=[ 3818], 95.00th=[ 5014], 00:12:42.196 | 99.00th=[ 6456], 99.50th=[ 6980], 99.90th=[ 8586], 99.95th=[ 8717], 00:12:42.196 | 99.99th=[ 8848] 00:12:42.196 bw ( KiB/s): min=87648, max=95984, per=99.14%, avg=91466.67, stdev=4211.69, samples=3 00:12:42.196 iops : min=21912, max=23996, avg=22866.67, stdev=1052.92, samples=3 00:12:42.196 write: IOPS=22.9k, BW=89.6MiB/s (93.9MB/s)(179MiB/2001msec); 0 zone resets 00:12:42.196 slat (nsec): min=3463, max=80521, avg=5355.90, stdev=2489.18 00:12:42.196 clat (usec): min=225, max=9074, avg=2766.60, stdev=931.72 00:12:42.196 lat (usec): min=230, max=9088, avg=2771.95, stdev=933.35 00:12:42.196 clat percentiles (usec): 00:12:42.196 | 1.00th=[ 1385], 5.00th=[ 1942], 10.00th=[ 2212], 20.00th=[ 2376], 00:12:42.196 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2573], 00:12:42.196 | 70.00th=[ 2638], 80.00th=[ 2769], 90.00th=[ 3752], 95.00th=[ 4948], 00:12:42.196 | 99.00th=[ 6390], 99.50th=[ 6980], 99.90th=[ 8717], 99.95th=[ 8717], 00:12:42.196 | 99.99th=[ 8848] 00:12:42.196 bw ( KiB/s): min=89360, max=95728, per=99.83%, avg=91586.67, stdev=3589.89, samples=3 00:12:42.196 iops : min=22340, max=23932, avg=22896.67, stdev=897.47, samples=3 00:12:42.196 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.13% 00:12:42.196 lat (msec) : 2=5.53%, 4=85.37%, 10=8.92% 00:12:42.196 cpu : usr=99.20%, sys=0.05%, ctx=9, majf=0, minf=608 00:12:42.196 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:42.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:42.196 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:42.196 issued rwts: total=46151,45892,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:42.196 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:42.196 00:12:42.196 Run status group 0 (all jobs): 00:12:42.196 READ: bw=90.1MiB/s (94.5MB/s), 90.1MiB/s-90.1MiB/s (94.5MB/s-94.5MB/s), io=180MiB (189MB), run=2001-2001msec 00:12:42.196 WRITE: bw=89.6MiB/s (93.9MB/s), 89.6MiB/s-89.6MiB/s (93.9MB/s-93.9MB/s), io=179MiB (188MB), run=2001-2001msec 00:12:42.455 ----------------------------------------------------- 00:12:42.455 Suppressions used: 00:12:42.455 count bytes template 00:12:42.455 1 32 /usr/src/fio/parse.c 00:12:42.455 1 8 libtcmalloc_minimal.so 00:12:42.455 ----------------------------------------------------- 00:12:42.455 00:12:42.455 19:31:01 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:42.455 19:31:01 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:12:42.455 19:31:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:42.455 19:31:01 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:12:42.455 19:31:01 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:12:42.455 19:31:01 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:12:42.713 19:31:01 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:12:42.713 19:31:01 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:42.713 19:31:01 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:42.714 19:31:01 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:42.714 19:31:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:42.714 19:31:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:42.714 19:31:01 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:42.714 19:31:01 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:12:42.714 19:31:01 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:42.714 19:31:01 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:42.714 19:31:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:12:42.714 19:31:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:12:42.714 19:31:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:42.714 19:31:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:42.714 19:31:01 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:42.714 19:31:01 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:12:42.714 19:31:01 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:12:42.714 19:31:01 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:12:42.972 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:12:42.972 fio-3.35 00:12:42.972 Starting 1 thread 00:12:52.943 00:12:52.943 test: (groupid=0, jobs=1): err= 0: pid=64593: Thu Dec 5 19:31:11 2024 00:12:52.943 read: IOPS=24.8k, BW=96.8MiB/s (101MB/s)(194MiB/2001msec) 00:12:52.943 slat (nsec): min=3366, max=54846, avg=4801.21, stdev=1864.11 00:12:52.943 clat (usec): min=558, max=7281, avg=2579.87, stdev=677.91 00:12:52.943 lat (usec): min=561, max=7292, avg=2584.67, stdev=679.05 00:12:52.943 clat percentiles (usec): 00:12:52.943 | 1.00th=[ 1614], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2343], 00:12:52.943 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:12:52.943 | 70.00th=[ 2507], 80.00th=[ 2573], 90.00th=[ 2802], 95.00th=[ 4047], 00:12:52.943 | 99.00th=[ 6194], 99.50th=[ 6390], 99.90th=[ 6783], 99.95th=[ 6915], 00:12:52.943 | 99.99th=[ 7242] 00:12:52.943 bw ( KiB/s): min=94552, max=100832, per=99.11%, avg=98229.33, stdev=3275.02, samples=3 00:12:52.943 iops : min=23638, max=25208, avg=24557.33, stdev=818.76, samples=3 00:12:52.943 write: IOPS=24.6k, BW=96.2MiB/s (101MB/s)(193MiB/2001msec); 0 zone resets 00:12:52.943 slat (nsec): min=3506, max=69859, avg=5123.00, stdev=1934.02 00:12:52.943 clat (usec): min=569, max=7350, avg=2581.68, stdev=679.54 00:12:52.943 lat (usec): min=573, max=7355, avg=2586.81, stdev=680.70 00:12:52.943 clat percentiles (usec): 00:12:52.943 | 1.00th=[ 1598], 5.00th=[ 2089], 10.00th=[ 2245], 20.00th=[ 2343], 00:12:52.943 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2442], 60.00th=[ 2474], 00:12:52.943 | 70.00th=[ 2507], 80.00th=[ 2573], 90.00th=[ 2802], 95.00th=[ 3982], 00:12:52.943 | 
99.00th=[ 6194], 99.50th=[ 6390], 99.90th=[ 6849], 99.95th=[ 7111], 00:12:52.943 | 99.99th=[ 7242] 00:12:52.943 bw ( KiB/s): min=94424, max=101680, per=99.73%, avg=98248.00, stdev=3643.85, samples=3 00:12:52.943 iops : min=23606, max=25420, avg=24562.00, stdev=910.96, samples=3 00:12:52.943 lat (usec) : 750=0.01%, 1000=0.05% 00:12:52.943 lat (msec) : 2=3.33%, 4=91.58%, 10=5.03% 00:12:52.943 cpu : usr=99.35%, sys=0.00%, ctx=5, majf=0, minf=606 00:12:52.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:12:52.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:52.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:52.943 issued rwts: total=49578,49281,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:52.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:52.943 00:12:52.943 Run status group 0 (all jobs): 00:12:52.943 READ: bw=96.8MiB/s (101MB/s), 96.8MiB/s-96.8MiB/s (101MB/s-101MB/s), io=194MiB (203MB), run=2001-2001msec 00:12:52.943 WRITE: bw=96.2MiB/s (101MB/s), 96.2MiB/s-96.2MiB/s (101MB/s-101MB/s), io=193MiB (202MB), run=2001-2001msec 00:12:52.943 ----------------------------------------------------- 00:12:52.943 Suppressions used: 00:12:52.943 count bytes template 00:12:52.943 1 32 /usr/src/fio/parse.c 00:12:52.943 1 8 libtcmalloc_minimal.so 00:12:52.943 ----------------------------------------------------- 00:12:52.943 00:12:52.943 19:31:11 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:12:52.943 19:31:11 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:12:52.943 00:12:52.943 real 0m29.529s 00:12:52.943 user 0m16.257s 00:12:52.943 sys 0m24.437s 00:12:52.943 ************************************ 00:12:52.943 END TEST nvme_fio 00:12:52.943 ************************************ 00:12:52.943 19:31:11 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:52.943 19:31:11 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:12:52.943 ************************************ 00:12:52.943 END TEST nvme 00:12:52.943 ************************************ 00:12:52.943 00:12:52.943 real 1m38.338s 00:12:52.943 user 3m36.318s 00:12:52.943 sys 0m34.813s 00:12:52.943 19:31:11 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:52.943 19:31:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:12:52.943 19:31:11 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:12:52.943 19:31:11 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:52.943 19:31:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:52.943 19:31:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.943 19:31:11 -- common/autotest_common.sh@10 -- # set +x 00:12:52.943 ************************************ 00:12:52.943 START TEST nvme_scc 00:12:52.943 ************************************ 00:12:52.943 19:31:11 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:12:52.943 * Looking for test storage... 
00:12:52.943 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:12:52.943 19:31:11 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:52.943 19:31:11 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:52.943 19:31:11 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:52.943 19:31:11 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@345 -- # : 1 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:52.943 19:31:11 nvme_scc -- scripts/common.sh@368 -- # return 0 00:12:52.943 19:31:11 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:52.943 19:31:11 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:52.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.943 --rc genhtml_branch_coverage=1 00:12:52.943 --rc genhtml_function_coverage=1 00:12:52.943 --rc genhtml_legend=1 00:12:52.943 --rc geninfo_all_blocks=1 00:12:52.943 --rc geninfo_unexecuted_blocks=1 00:12:52.943 00:12:52.943 ' 00:12:52.943 19:31:11 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:52.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.943 --rc genhtml_branch_coverage=1 00:12:52.943 --rc genhtml_function_coverage=1 00:12:52.943 --rc genhtml_legend=1 00:12:52.943 --rc geninfo_all_blocks=1 00:12:52.943 --rc geninfo_unexecuted_blocks=1 00:12:52.943 00:12:52.943 ' 00:12:52.943 19:31:11 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:12:52.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.943 --rc genhtml_branch_coverage=1 00:12:52.943 --rc genhtml_function_coverage=1 00:12:52.943 --rc genhtml_legend=1 00:12:52.943 --rc geninfo_all_blocks=1 00:12:52.943 --rc geninfo_unexecuted_blocks=1 00:12:52.943 00:12:52.943 ' 00:12:52.943 19:31:11 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:52.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:52.943 --rc genhtml_branch_coverage=1 00:12:52.943 --rc genhtml_function_coverage=1 00:12:52.943 --rc genhtml_legend=1 00:12:52.943 --rc geninfo_all_blocks=1 00:12:52.943 --rc geninfo_unexecuted_blocks=1 00:12:52.943 00:12:52.943 ' 00:12:52.943 19:31:11 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:52.943 19:31:11 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:12:52.944 19:31:11 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:12:52.944 19:31:11 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:12:52.944 19:31:11 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:52.944 19:31:11 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:52.944 19:31:11 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.944 19:31:11 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.944 19:31:11 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.944 19:31:11 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.944 19:31:11 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.944 19:31:11 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.944 19:31:11 nvme_scc -- paths/export.sh@5 -- # export PATH 00:12:52.944 19:31:11 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
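[Editor's note on the lt/cmp_versions xtrace at the top of this test (the lcov gate): it is a field-wise version comparison in which both version strings are split on '.', '-' and ':' and the numeric fields are compared left to right. A minimal sketch covering only the operators exercised here; the real scripts/common.sh helper also normalizes non-numeric fields and supports more operators:

lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v f1 f2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    local len=$((${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}))
    for ((v = 0; v < len; v++)); do
        f1=${ver1[v]:-0} f2=${ver2[v]:-0}             # missing fields compare as 0
        ((f1 > f2)) && { [[ $op == '>' ]]; return; }  # first differing field decides
        ((f1 < f2)) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == *'='* ]]   # all fields equal: only ==, <=, >= succeed
}
lt 1.15 2 && echo "lcov older than 2.x"   # the check performed above]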
00:12:52.944 19:31:11 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:12:52.944 19:31:11 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:12:52.944 19:31:11 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:12:52.944 19:31:11 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:12:52.944 19:31:11 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:12:52.944 19:31:11 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:12:52.944 19:31:11 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:12:52.944 19:31:11 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:12:52.944 19:31:11 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:12:52.944 19:31:11 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:52.944 19:31:11 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:12:52.944 19:31:11 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:12:52.944 19:31:11 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:12:52.944 19:31:11 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:52.944 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:53.202 Waiting for block devices as requested 00:12:53.202 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:12:53.202 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:12:53.202 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:12:53.460 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:58.732 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:58.732 19:31:17 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:12:58.732 19:31:17 nvme_scc -- scripts/common.sh@18 -- # local i 00:12:58.732 19:31:17 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:12:58.732 19:31:17 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:58.732 19:31:17 nvme_scc -- scripts/common.sh@27 -- # return 0 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
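[Editor's note: the wall of IFS=: / read -r reg val lines that follows is nvme_get() flattening `nvme id-ctrl` output into an associative array. Each output line is a "register : value" pair, and every non-empty pair becomes a key of nvme0[]. Stripped of the eval/nameref machinery, the loop is essentially this simplified sketch:

declare -A nvme0=()
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}              # register name, padding stripped (e.g. "ps    0" -> ps0)
    [[ -n $reg && -n $val ]] || continue  # skip headers and blank lines, as in the [[ -n '' ]] check above
    nvme0[$reg]=${val# }                  # e.g. nvme0[vid]=0x1b36, nvme0[mdts]=7
done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
echo "sn=${nvme0[sn]} mn=${nvme0[mn]}"    # -> 12341 / QEMU NVMe Ctrl (space-padded, as in the trace)

The same loop is repeated for every controller and namespace found, which is why the dump runs for several screens.]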
00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:58.732 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.733 19:31:17 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
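[Editor's note, mid-dump: the array receiving these assignments is named at run time (nvme0 now, nvme1 for the next controller, ng0n1 for a namespace), which is why nvme_get() declares it with local -gA and assigns through eval, and why the namespace walk further down switches to a bash nameref (local -n _ctrl_ns=nvme0_ns). A toy equivalent of that indirection; set_field is a hypothetical name for illustration, not the functions.sh API:

# functions.sh does this inline inside nvme_get() via eval "${ref}[$reg]=...";
# a nameref expresses the same run-time array naming more directly.
set_field() {
    local -n arr=$1   # nameref: arr now aliases the array named by $1
    arr[$2]=$3
}
declare -A nvme0=()
set_field nvme0 vid 0x1b36
echo "${nvme0[vid]}"   # -> 0x1b36]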
00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.733 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:12:58.734 19:31:17 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.734 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:58.735 19:31:17 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.735 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:12:58.736 
19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
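
The wall of near-identical trace statements above is one small loop in the test helper nvme/functions.sh: nvme_get feeds nvme-cli's id-ctrl/id-ns output through read with IFS=:, then evals each non-empty "field : value" pair into a bash associative array (nvme0, ng0n1, and so on). The eval wraps the value in embedded double quotes so multi-word values such as the power-state and lbaf descriptor strings survive intact. A minimal stand-alone sketch of the same pattern, assuming nvme-cli is installed and /dev/nvme0n1 exists; the whitespace trimming is an approximation of the helper, not its verbatim code, and direct associative assignment is used here instead of eval:

  #!/usr/bin/env bash
  # nvme-cli prints "field      : value" lines; split each on ':' and
  # keep the pair in an associative array, as nvme_get does above.
  declare -A id_ns=()

  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}   # "lbaf  0 " -> "lbaf0", "nsze " -> "nsze"
      val=${val# }               # drop the single space after the colon
      [[ -n $reg && -n $val ]] && id_ns[$reg]=$val
  done < <(nvme id-ns /dev/nvme0n1)

  printf 'nsze=%s flbas=%s\n' "${id_ns[nsze]}" "${id_ns[flbas]}"

This is also why every assignment in the log is bracketed by the same IFS=: and read -r reg val lines: the loop re-executes them once per line of nvme-cli output.
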
00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:12:58.736 19:31:17 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.736 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.737 19:31:17 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:12:58.737 19:31:17 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.737 19:31:17 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.737 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:12:58.738 19:31:17 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:58.738 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:12:58.739 19:31:17 nvme_scc -- scripts/common.sh@18 -- # local i 00:12:58.739 19:31:17 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:12:58.739 19:31:17 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:58.739 19:31:17 nvme_scc -- scripts/common.sh@27 -- # return 0 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:12:58.739 19:31:17 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.739 
19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:12:58.739 
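
At this point the trace has finished the first controller: both namespace nodes of nvme0 (the character device ng0n1 and the block device nvme0n1, matched by the extglob @("ng${ctrl##*nvme}"|"${ctrl##*/}n")*) were parsed, the controller was recorded in the ctrls, nvmes, bdfs and ordered_ctrls arrays, and the outer loop moved on to /sys/class/nvme/nvme1 after it passed the pci_can_use check in scripts/common.sh (the [[ =~ 0000:00:10.0 ]] with an empty left-hand side suggests an unset allow-list, so the check falls through to return 0). A rough sketch of that enumeration skeleton; the pci_can_use stand-in below only honors a hypothetical PCI_BLOCKED deny-list and is simpler than the real helper:

  #!/usr/bin/env bash
  # Walk the NVMe controllers in sysfs, filter by PCI address, and
  # visit each namespace node with the same extglob as the trace.
  shopt -s extglob nullglob

  declare -A ctrls=() bdfs=()

  pci_can_use() { [[ " ${PCI_BLOCKED:-} " != *" $1 "* ]]; }

  for ctrl in /sys/class/nvme/nvme+([0-9]); do
      name=${ctrl##*/}              # nvme0, nvme1, ...
      pci=$(readlink -f "$ctrl/device")
      pci=${pci##*/}                # BDF, e.g. 0000:00:10.0
      pci_can_use "$pci" || continue
      ctrls[$name]=$name
      bdfs[$name]=$pci
      # Both ng<X>n<Y> (char) and nvme<X>n<Y> (block) nodes sit under
      # the controller's sysfs directory.
      for ns in "$ctrl"/@("ng${name##nvme}"|"${name}n")*; do
          echo "$name -> ${ns##*/} ($pci)"
      done
  done
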
19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:12:58.739 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:12:58.740 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
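The eval/read pattern that dominates this trace is nvme_get from nvme/functions.sh turning "nvme id-ctrl" text output into a bash associative array (nvme1[sqes]=0x66, nvme1[cqes]=0x44, and so on). A minimal sketch of that loop, assuming simplified whitespace trimming (the real function differs in details such as binary-output handling):

    shopt -s extglob
    nvme_get() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                     # e.g. declares global nvme1=()
        while IFS=: read -r reg val; do         # split each "field : value" line
            reg=${reg//+([[:space:]])/}         # strip padding from the key
            val=${val##+([[:space:]])}          # trim leading whitespace only
            [[ -n $val ]] && eval "${ref}[$reg]=\"$val\""   # nvme1[sqes]="0x66"
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }

Because read assigns everything after the first colon to val, composite values such as ps0 ("mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0") survive intact, which is exactly what the trace records.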
00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.741 19:31:17 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.741 19:31:17 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:58.742 19:31:17 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
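For orientation: nsze, ncap and nuse are counted in logical blocks, and flbas=0x7 selects LBA format 7, which this dump later reports as lbads:12, i.e. 4096-byte blocks. A back-of-envelope size check, added purely as illustration:

    printf '%d\n' 0x17a17a              # 1548666 logical blocks
    echo $(( 0x17a17a * 4096 ))         # 6343335936 bytes, roughly 5.9 GiB

So the emulated ng1n1 namespace comes out to about 5.9 GiB at its in-use format.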
00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.742 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:12:58.743 19:31:17 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 
19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
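The per-namespace dumps are driven by the extglob pattern visible above, for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*, which matches both the generic character node (ng1n1) and the block node (nvme1n1) under the controller's sysfs directory. Expanded by hand for this controller:

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme1
    # ${ctrl##*nvme} -> "1" and ${ctrl##*/} -> "nvme1", so the pattern is
    # effectively /sys/class/nvme/nvme1/@(ng1|nvme1n)* and expands to:
    #   /sys/class/nvme/nvme1/ng1n1  /sys/class/nvme/nvme1/nvme1n1
    echo "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*

Both matches index _ctrl_ns with ${ns##*n} = 1, so, as the trace shows, the nvme1n1 entry recorded second overwrites the ng1n1 entry at the same slot.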
00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:12:58.743 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:12:58.744 
19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:58.744 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:12:58.745 19:31:17 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:12:58.745 19:31:17 nvme_scc -- scripts/common.sh@18 -- # local i 00:12:58.745 19:31:17 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:12:58.745 19:31:17 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:58.745 19:31:17 nvme_scc -- scripts/common.sh@27 -- # return 0 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
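Stepping back, the outer structure at functions.sh@47-63 is a discovery loop over /sys/class/nvme/nvme*: gate each controller through pci_can_use, dump its identify data, then register it in the global maps. A condensed sketch in which the BDF lookup and the pci_can_use body are assumptions (the real filter lives in scripts/common.sh):

    pci_can_use() { return 0; }   # stub: the real check honors allow/block lists
    declare -A ctrls nvmes bdfs; declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        pci=$(basename "$(readlink -f "$ctrl/device")")  # BDF, e.g. 0000:00:12.0
        pci_can_use "$pci" || continue
        ctrl_dev=${ctrl##*/}                             # e.g. nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"    # fills nvme2[...], as sketched earlier
        # per-namespace nvme_get calls over ng*/nvme*n* omitted here
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done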
00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.745 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:12:58.746 19:31:17 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.746 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:12:58.747 19:31:17 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:12:58.747 
19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:12:58.747 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:58.748 
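Every nvme2[...] assignment above is one turn of the same small loop. What follows is a self-contained reconstruction of that pattern from the trace, not the shipped SPDK nvme/functions.sh source; nvme_get_sketch is a hypothetical name:

    # Read "field : value" lines from nvme-cli and store them in a global
    # associative array whose name the caller picks (nvme2, ng2n1, ...).
    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"               # e.g. declare -gA nvme2=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}      # field names are blank-padded ("ps 0" -> "ps0")
            val=${val# }                  # assumes one pad space after ':'
            [[ -n $reg && -n $val ]] || continue
            # eval makes the dynamic array name work; $val is expanded on the
            # right side of an assignment, so entries containing spaces
            # (fr, ps0, lbaf0..7) survive intact.
            eval "${ref}[\$reg]=\$val"
        done < <("$@")
    }

    # Hypothetical usage mirroring the pass above:
    #   nvme_get_sketch nvme2 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
    #   echo "${nvme2[subnqn]}"           # nqn.2019-08.org.qemu:12342

The trace differs only in that the value is already inlined by the time eval runs (eval 'nvme2[sqes]="0x66"'); the quoting serves the same purpose, keeping multi-word values as a single array entry.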
19:31:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:12:58.748 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:12:58.749 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:12:58.750 19:31:17 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 
19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:58.750 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:58.751 19:31:17 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:58.751 19:31:17 
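The pass completed above is one full cycle of the harness's nvme_get helper: it runs nvme-cli's id-ns against the node (here /dev/ng2n2, registered into _ctrl_ns before the loop advanced to ng2n3), then a `while IFS=: read -r reg val` loop evals every `field : value` pair into a global associative array named after the device. A minimal sketch of that idiom, assuming nvme-cli's human-readable id-ns output of one `field : value` pair per line; the name parse_id_ns and the exact whitespace trimming are illustrative, not lifted from functions.sh:

parse_id_ns() {
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"                       # global assoc array, like the 'local -gA ng2n2=()' traced above
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue  # skip header/blank lines, as the [[ -n '' ]] check above does
        reg=${reg//[[:space:]]/}              # "lbaf  4 " -> lbaf4
        val=${val# }
        eval "${ref}[\$reg]=\"\$val\""        # e.g. ng2n2[nsze]=0x100000
    done < <(nvme id-ns "$dev")
}
parse_id_ns ng2n2 /dev/ng2n2

Passing the array name in ref is what lets one helper populate ng2n2, ng2n3, nvme2n1 and so on; the trace below is this same pass repeated for each remaining node.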
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:12:58.751 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.752 19:31:17 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:12:58.752 19:31:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- 
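At this point ng2n3 has been stored and the enumeration has moved on to the first block node, nvme2n1. The glob driving the loop, "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*, matches both the generic character nodes (ng2n*) and the block nodes (nvme2n*) under the controller directory, and _ctrl_ns[${ns##*n}] keys each parsed array by its namespace number. The expansions, shown standalone with the paths from this run (the @(...) alternation needs extglob, which the harness presumably enables):

ctrl=/sys/class/nvme/nvme2
echo "${ctrl##*nvme}"   # -> 2, so "ng${ctrl##*nvme}"* matches ng2n1, ng2n2, ...
echo "${ctrl##*/}"      # -> nvme2, so "${ctrl##*/}n"* matches nvme2n1, nvme2n2, ...
ns=/sys/class/nvme/nvme2/nvme2n3
echo "${ns##*n}"        # -> 3, the _ctrl_ns key for this namespace

Since ngXnY and nvmeXnY end in the same number, and glob expansion sorts ng* before nvme*, each block-node entry parsed below overwrites the generic-node entry at the same _ctrl_ns index.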
nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:58.753 19:31:17 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:12:58.753 19:31:17 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.753 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 
lbads:9 rp:0 ' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 
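nvme2n1 is now registered, and its lbaf0..lbaf7 entries above enumerate the supported LBA formats: ms is the metadata bytes per block, lbads the log2 of the data block size, rp a relative-performance hint. The "(in use)" marker sits on lbaf4, consistent with flbas=0x4, since the low nibble of FLBAS selects the active format. A worked decode of the traced values (illustrative arithmetic, not harness code):

flbas=0x4
fmt=$(( flbas & 0xf ))   # FLBAS bits 3:0 pick the LBA format -> 4
lbads=12                 # lbaf4 above: "ms:0 lbads:12 rp:0 (in use)"
echo "format $fmt: $(( 1 << lbads ))-byte data blocks, no metadata"

So every namespace in this run uses 4096-byte blocks with no separate metadata region.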
]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:12:58.754 19:31:17 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.754 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n2[nulbaf]="0"' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:58.755 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:12:58.756 
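Each namespace reports nsze=ncap=nuse=0x100000, i.e. fully allocated and fully in use. Combined with the 4096-byte format decoded above, the per-namespace size works out as follows (illustrative arithmetic):

nsze=0x100000
echo "$(( nsze )) LBAs x 4096 B = $(( nsze * 4096 / 1024**3 )) GiB"   # 1048576 LBAs x 4096 B = 4 GiB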
19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:12:58.756 19:31:17 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.756 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:12:58.757 19:31:17 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:12:58.757 19:31:17 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:12:58.757 19:31:17 nvme_scc -- scripts/common.sh@18 -- # local i 00:12:58.757 19:31:17 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:12:58.757 19:31:17 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:12:58.757 19:31:17 nvme_scc -- scripts/common.sh@27 -- # return 0 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:12:58.757 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.757 19:31:17 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:12:58.758 19:31:17 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:12:58.758 19:31:17 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 
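Just above, id-ctrl for nvme3 lands oacs=0x12a in the array. OACS is the Optional Admin Command Support bitmask; reading 0x12a against the NVMe base specification gives bit 1 (Format NVM), bit 3 (Namespace Management), bit 5 (Directives) and bit 8 (Doorbell Buffer Config), a typical QEMU controller profile. A small decoder in the same shell style (the bit names come from the spec, not from functions.sh, so treat them as an annotation):

#!/usr/bin/env bash
# Decode the OACS value captured above (nvme3[oacs]=0x12a). Bit positions and
# names follow the NVMe base specification's OACS field; they are not defined
# anywhere in functions.sh itself.
oacs=0x12a
names=([0]="Security Send/Receive" [1]="Format NVM" [2]="Firmware Download/Commit"
       [3]="Namespace Management" [4]="Device Self-test" [5]="Directives"
       [6]="NVMe-MI Send/Receive" [7]="Virtualization Management"
       [8]="Doorbell Buffer Config")
for bit in "${!names[@]}"; do
    (( oacs & (1 << bit) )) && echo "OACS bit $bit: ${names[$bit]}"
done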
19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.758 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:12:58.759 19:31:17 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 
19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:12:58.759 
19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.759 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.760 19:31:17 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:12:58.760 19:31:17 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:12:58.760 19:31:17 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:12:58.760 19:31:17 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:12:58.761 19:31:17 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:12:58.761 19:31:17 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:12:58.761 19:31:17 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:12:58.761 19:31:17 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:59.326 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:59.587 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:59.587 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:59.587 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:59.859 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:59.859 19:31:18 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:12:59.859 19:31:18 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:59.859 19:31:18 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.859 19:31:18 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:12:59.859 ************************************ 00:12:59.859 START TEST nvme_simple_copy 00:12:59.859 ************************************ 00:12:59.859 19:31:18 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:13:00.117 Initializing NVMe Controllers 00:13:00.117 Attaching to 0000:00:10.0 00:13:00.117 Controller supports SCC. Attached to 0000:00:10.0 00:13:00.117 Namespace ID: 1 size: 6GB 00:13:00.117 Initialization complete. 
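The controller selection above comes down to one arithmetic gate: ctrl_has_scc pulls oncs out of each captured id-ctrl array and tests (( oncs & 1 << 8 )), bit 8 of ONCS being Copy command support in the NVMe base specification. All four controllers report oncs=0x15d, which has that bit set, so every one qualifies and nvme1 is simply the first echoed back. A condensed sketch of the gate, with the ONCS values written out explicitly rather than rebuilt from the per-controller namerefs:

#!/usr/bin/env bash
# Sketch of the ctrl_has_scc gate from the trace: keep only controllers whose
# ONCS has bit 8 (Copy) set. The values are the ones captured above; holding
# them in one illustrative map stands in for the script's nameref lookups.
declare -A oncs_by_ctrl=([nvme0]=0x15d [nvme1]=0x15d [nvme2]=0x15d [nvme3]=0x15d)

for ctrl in "${!oncs_by_ctrl[@]}"; do
    (( oncs_by_ctrl[$ctrl] & (1 << 8) )) && echo "$ctrl supports the Copy command"
done

With nvme1 (0000:00:10.0) selected, the simple_copy binary then writes LBAs 0 through 63 and copies them to LBA 256, which the output that follows confirms with 64 matching LBAs.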
00:13:00.117 00:13:00.117 Controller QEMU NVMe Ctrl (12340 ) 00:13:00.117 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:13:00.117 Namespace Block Size:4096 00:13:00.117 Writing LBAs 0 to 63 with Random Data 00:13:00.117 Copied LBAs from 0 - 63 to the Destination LBA 256 00:13:00.117 LBAs matching Written Data: 64 00:13:00.117 00:13:00.117 real 0m0.256s 00:13:00.117 user 0m0.096s 00:13:00.117 sys 0m0.059s 00:13:00.117 19:31:18 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.117 ************************************ 00:13:00.117 19:31:18 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:13:00.117 END TEST nvme_simple_copy 00:13:00.117 ************************************ 00:13:00.117 00:13:00.117 real 0m7.448s 00:13:00.117 user 0m0.991s 00:13:00.117 sys 0m1.275s 00:13:00.117 19:31:18 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.117 19:31:18 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:13:00.117 ************************************ 00:13:00.117 END TEST nvme_scc 00:13:00.117 ************************************ 00:13:00.117 19:31:18 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:13:00.117 19:31:18 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:13:00.117 19:31:18 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:13:00.117 19:31:18 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:13:00.117 19:31:18 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:13:00.117 19:31:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:00.117 19:31:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.117 19:31:18 -- common/autotest_common.sh@10 -- # set +x 00:13:00.117 ************************************ 00:13:00.117 START TEST nvme_fdp 00:13:00.117 ************************************ 00:13:00.117 19:31:18 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:13:00.117 * Looking for test storage... 00:13:00.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:00.117 19:31:19 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:00.117 19:31:19 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version 00:13:00.117 19:31:19 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:00.117 19:31:19 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:00.117 19:31:19 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:00.117 19:31:19 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:00.117 19:31:19 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:00.117 19:31:19 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:13:00.117 19:31:19 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:13:00.117 19:31:19 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:13:00.117 19:31:19 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:13:00.117 19:31:19 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:13:00.117 19:31:19 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:13:00.117 19:31:19 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:13:00.117 19:31:19 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:00.117 19:31:19 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:13:00.117 19:31:19 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:13:00.117 19:31:19 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:00.117 19:31:19 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:00.117 19:31:19 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:13:00.117 19:31:19 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:13:00.117 19:31:19 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:00.117 19:31:19 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:13:00.117 19:31:19 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:13:00.376 19:31:19 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:13:00.376 19:31:19 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:13:00.376 19:31:19 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:00.376 19:31:19 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:13:00.376 19:31:19 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:13:00.376 19:31:19 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:00.376 19:31:19 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:00.376 19:31:19 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:13:00.376 19:31:19 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:00.376 19:31:19 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:00.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.376 --rc genhtml_branch_coverage=1 00:13:00.376 --rc genhtml_function_coverage=1 00:13:00.376 --rc genhtml_legend=1 00:13:00.376 --rc geninfo_all_blocks=1 00:13:00.376 --rc geninfo_unexecuted_blocks=1 00:13:00.376 00:13:00.376 ' 00:13:00.376 19:31:19 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:00.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.376 --rc genhtml_branch_coverage=1 00:13:00.376 --rc genhtml_function_coverage=1 00:13:00.376 --rc genhtml_legend=1 00:13:00.376 --rc geninfo_all_blocks=1 00:13:00.376 --rc geninfo_unexecuted_blocks=1 00:13:00.376 00:13:00.376 ' 00:13:00.376 19:31:19 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:00.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.376 --rc genhtml_branch_coverage=1 00:13:00.376 --rc genhtml_function_coverage=1 00:13:00.376 --rc genhtml_legend=1 00:13:00.376 --rc geninfo_all_blocks=1 00:13:00.376 --rc geninfo_unexecuted_blocks=1 00:13:00.376 00:13:00.376 ' 00:13:00.376 19:31:19 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:00.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.376 --rc genhtml_branch_coverage=1 00:13:00.376 --rc genhtml_function_coverage=1 00:13:00.376 --rc genhtml_legend=1 00:13:00.376 --rc geninfo_all_blocks=1 00:13:00.376 --rc geninfo_unexecuted_blocks=1 00:13:00.376 00:13:00.376 ' 00:13:00.376 19:31:19 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:00.376 19:31:19 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:13:00.376 19:31:19 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:13:00.376 19:31:19 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:13:00.376 19:31:19 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:00.376 19:31:19 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:13:00.376 19:31:19 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.376 19:31:19 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.376 19:31:19 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.376 19:31:19 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.376 19:31:19 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.376 19:31:19 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.376 19:31:19 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:13:00.376 19:31:19 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.376 19:31:19 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:13:00.376 19:31:19 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:13:00.376 19:31:19 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:13:00.377 19:31:19 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:13:00.377 19:31:19 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:13:00.377 19:31:19 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:13:00.377 19:31:19 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:13:00.377 19:31:19 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:13:00.377 19:31:19 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:13:00.377 19:31:19 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:00.377 19:31:19 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:00.635 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:00.635 Waiting for block devices as requested 00:13:00.635 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:00.893 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:00.893 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:00.893 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:06.178 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:06.178 19:31:24 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:13:06.178 19:31:24 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:13:06.178 19:31:24 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:06.178 19:31:24 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:13:06.178 19:31:24 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:13:06.178 19:31:24 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:13:06.178 19:31:24 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:06.178 19:31:24 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:06.178 19:31:24 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:06.178 19:31:24 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:06.178 19:31:24 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:13:06.178 19:31:24 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:13:06.178 19:31:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:13:06.178 19:31:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:06.178 19:31:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:13:06.178 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.178 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.178 19:31:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:13:06.178 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:06.178 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.178 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.178 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:06.178 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:13:06.178 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:13:06.178 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.179 19:31:24 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:13:06.179 19:31:24 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.179 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:06.180 19:31:24 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:13:06.180 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.181 19:31:24 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.181 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:13:06.182 19:31:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.182 
19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:13:06.182 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:13:06.183 19:31:24 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:13:06.183 19:31:24 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:13:06.183 19:31:24 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.183 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
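The ng0n1 fields just captured include the per-namespace Copy limits: mssrl (Maximum Single Source Range Length) = 128 LBAs, mcl (Maximum Copy Length) = 128, and msrc (Maximum Source Range Count) = 127, a zeroes-based value, so 128 source ranges. These bound what the simple-copy and FDP tests may ask of the drive. One way to read the same three fields by hand, assuming nvme-cli and the device node from this run (the exact output alignment is an assumption):

# Hedged sketch: pull the Copy-command limits the trace just recorded.
nvme id-ns /dev/ng0n1 | grep -E '^(mssrl|mcl|msrc)'
# Per the values above this should print roughly:
#   mssrl : 128
#   mcl   : 128
#   msrc  : 127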
00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:13:06.184 19:31:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:06.184 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
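The long run of IFS=: / read -r reg val / eval steps above is functions.sh's nvme_get turning nvme id-ns text into a bash associative array (ng0n1[nsze]=0x140000, ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)', and so on); read puts everything after the first colon into val, which is why the colon-bearing lbaf* values survive intact. A condensed sketch of that loop, with the harness's eval replaced by a direct assignment (the trimming is a simplification, not the exact harness code):

# Hedged sketch of the nvme_get pattern traced above: one associative
# array per namespace, keyed by id-ns field name.
declare -A ns
while IFS=: read -r reg val; do
  reg=${reg//[[:space:]]/}         # field names are plain tokens; drop padding
  [[ -n $reg ]] && ns[$reg]=$val   # keep the raw value, embedded colons and all
done < <(nvme id-ns /dev/ng0n1)
echo "nsze=${ns[nsze]} flbas=${ns[flbas]} lbaf4=${ns[lbaf4]}"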
00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:13:06.185 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:13:06.186 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0
00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]]
00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:13:06.187 19:31:24 nvme_fdp -- scripts/common.sh@18 -- # local i
00:13:06.187 19:31:24 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]]
00:13:06.187 19:31:24 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:13:06.187 19:31:24 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1
00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1
00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val
00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()'
00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36
00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4
'nvme1[sn]="12340 "' 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:13:06.187 19:31:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:13:06.187 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.188 19:31:24 nvme_fdp -- 
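Once nvme_get returns, later test steps can consult the array directly; capability fields such as ctratt (0x8000 for this controller) are plain hex strings, so bash arithmetic can test individual bits. A hedged example (the bit number is illustrative only; consult the NVMe Identify Controller definition for what each CTRATT bit means before relying on one):

  ctratt=$(( ${nvme1[ctratt]} ))     # bash parses the 0x prefix, here 0x8000
  bit=15                             # hypothetical bit of interest
  if (( ctratt & (1 << bit) )); then
      echo "nvme1: CTRATT bit $bit is set"
  fi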
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0
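The wctemp and cctemp values above are kelvin, per the NVMe Identify Controller WCTEMP/CCTEMP fields; this QEMU controller reports the common 343 K (70 °C) warning and 373 K (100 °C) critical thresholds. A quick conversion of the captured values:

  for reg in wctemp cctemp; do
      k=${nvme1[$reg]}
      printf '%s: %sK = %dC\n' "$reg" "$k" $(( k - 273 ))   # 343K -> 70C, 373K -> 100C
  done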
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0
00:13:06.188 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0
00:13:06.189 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-'
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=-
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()'
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0
00:13:06.190 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0
00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0
00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0
00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0
00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0
00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0
00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0
00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0
00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0
00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0
00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0
00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0
00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128
00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128
00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127
00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0
00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0
00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0
00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0
00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:06.191 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:06.191 19:31:24 nvme_fdp -- 
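
What the trace above is doing: nvme_get runs `/usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1`, splits every `field : value` output line on the first colon, and stores the pairs in a global bash associative array named after the device node, which is why each field surfaces as an `eval 'ng1n1[...]=...'` step. A minimal sketch of that idea (hypothetical helper name and trimming; the real nvme/functions.sh loop uses a shared `IFS=:` read loop with eval, as seen in the trace):

    nvme_get_min() {
        local ref=$1 subcmd=$2 dev=$3 reg val
        declare -gA "$ref=()"                 # global assoc array, e.g. ng1n1=()
        local -n arr=$ref                     # nameref: arr[key]= writes into $ref
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}          # field names are padded, strip it
            val=${val#"${val%%[![:space:]]*}"}   # left-trim the value
            [[ -n $reg && -n $val ]] && arr[$reg]=$val
        done < <(/usr/local/src/nvme-cli/nvme "$subcmd" "$dev")
    }
    # usage, mirroring the trace: nvme_get_min ng1n1 id-ns /dev/ng1n1
    # afterwards: echo "${ng1n1[nsze]}"   ->  0x17a17a
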
00:13:06.192 19:31:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[1]=ng1n1
00:13:06.192 19:31:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:13:06.192 19:31:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:13:06.192 19:31:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:13:06.193 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1: identical id-ns geometry to ng1n1 above (same namespace, reached via the block node rather than the generic char node): nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1, nawun..nows all 0, mssrl=128 mcl=128 msrc=127, nguid/eui64 all zero
00:13:06.193 19:31:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1: lbaf0-lbaf7 as for ng1n1, with lbaf7='ms:64 lbads:12 rp:0 (in use)'
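
A quick sanity check on the geometry both namespace nodes report: flbas=0x7 selects LBA format 7, and lbaf7 is indeed the entry marked '(in use)' above (lbads:12, i.e. 4096-byte data blocks, plus ms:64 bytes of metadata). With nsze=0x17a17a blocks that works out as below (values copied from the trace; the 0xf mask is the FLBAS format-index field from the NVMe base spec):

    flbas=0x7 nsze=0x17a17a
    fmt=$(( flbas & 0xf ))        # -> 7, i.e. lbaf7: ms:64 lbads:12 rp:0
    lbads=12                      # data block size is 2^lbads bytes
    echo "lbaf$fmt in use: $(( nsze )) blocks x $(( 1 << lbads )) B = $(( nsze * (1 << lbads) )) bytes"
    # -> lbaf7 in use: 1548666 blocks x 4096 B = 6343335936 bytes (~5.9 GiB)
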
00:13:06.193 19:31:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme1n1
00:13:06.193 19:31:24 nvme_fdp -- nvme/functions.sh@60 -- # ctrls[nvme1]=nvme1
00:13:06.193 19:31:24 nvme_fdp -- nvme/functions.sh@61 -- # nvmes[nvme1]=nvme1_ns
00:13:06.193 19:31:24 nvme_fdp -- nvme/functions.sh@62 -- # bdfs[nvme1]=0000:00:10.0
00:13:06.193 19:31:24 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[1]=nvme1
00:13:06.193 19:31:24 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:13:06.193 19:31:24 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:13:06.193 19:31:24 nvme_fdp -- scripts/common.sh@27 -- # pci_can_use 0000:00:12.0 -> return 0
00:13:06.193 19:31:24 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
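
At this point the walker has finished nvme1: the controller and its namespace map are filed into the parallel arrays ctrls/nvmes/bdfs/ordered_ctrls, and the outer loop over /sys/class/nvme moves on to nvme2 at PCI 0000:00:12.0. A simplified sketch of that discovery loop (assumptions: the extglob namespace pattern as it appears in the trace, and the PCI address read from the sysfs `device` symlink; the real loop additionally filters through pci_can_use and fills each array entry via nvme_get):

    shopt -s extglob nullglob
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    for ctrl in /sys/class/nvme/nvme+([0-9]); do
        ctrl_dev=${ctrl##*/}                              # e.g. nvme1, nvme2
        pci=$(basename "$(readlink -f "$ctrl/device")")   # e.g. 0000:00:10.0
        # a namespace may appear as a generic char node (ng1n1) and/or a
        # block node (nvme1n1); the extglob alternation catches both
        for ns in "$ctrl/"@("ng${ctrl_dev#nvme}"|"${ctrl_dev}n")*; do
            echo "ns ${ns##*/} belongs to $ctrl_dev"
        done
        ctrls[$ctrl_dev]=$ctrl_dev
        nvmes[$ctrl_dev]=${ctrl_dev}_ns                   # name of its per-ns array
        bdfs[$ctrl_dev]=$pci
        ordered_ctrls[${ctrl_dev#nvme}]=$ctrl_dev         # slot by controller number
    done
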
[[ -n '' ]] 00:13:06.193 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.193 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.193 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:06.193 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:13:06.193 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:13:06.193 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.193 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.193 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:06.193 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:13:06.193 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:13:06.193 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.193 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.193 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:13:06.193 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:13:06.193 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:13:06.193 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.193 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.193 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:06.193 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:13:06.193 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:13:06.193 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.193 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.193 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:06.193 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:13:06.194 19:31:25 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.194 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:13:06.195 19:31:25 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:13:06.195 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
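[Annotation] The xtrace above repeats a single pattern from nvme/functions.sh's nvme_get (the @21-@23 lines): split nvme-cli identify output on ':' with read, skip entries with an empty value, and assign each field into a global associative array (nvme2 here). A minimal standalone sketch of that loop follows; it substitutes a bash nameref for the eval-on-a-shifted-ref trick the trace shows, and parse_id_output is an illustrative name, not part of the SPDK scripts.

#!/usr/bin/env bash
# Sketch of the nvme_get parsing loop traced above: nvme-cli prints
# "field : value" lines; splitting on ':' and assigning into an
# associative array reproduces nvme2[sqes]=0x66, nvme2[nn]=256, etc.
parse_id_output() {
    local -n _ref=$1          # caller's associative array (declare -A)
    local reg val
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue          # mirrors the [[ -n ... ]] guard
        reg=${reg//[[:space:]]/}           # strip padding around the key
        _ref[$reg]=${val# }                # drop the leading space
    done
}

declare -A nvme2
# Real input would come from: /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
parse_id_output nvme2 <<'EOF'
sqes  : 0x66
cqes  : 0x44
nn    : 256
oncs  : 0x15d
EOF
echo "sqes=${nvme2[sqes]} cqes=${nvme2[cqes]} nn=${nvme2[nn]} oncs=${nvme2[oncs]}"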
00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:06.196 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.197 
19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:13:06.197 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.198 19:31:25 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.198 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:13:06.199 19:31:25 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.199 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.199 
19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.200 19:31:25 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:06.200 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:13:06.201 
19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
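[Annotation] The @54-@58 lines above drive the per-namespace walk this trace keeps returning to: an extglob over the controller's sysfs directory matches both the generic character nodes (ng2n1, ng2n2, ng2n3) and the block nodes (nvme2n1, ...), and each device is keyed by its trailing namespace id in the _ctrl_ns map (nvme2_ns for this controller). A hedged reconstruction, runnable on its own against any /sys/class/nvme/nvmeX:

# Reconstruction of the namespace loop from functions.sh@54-58; variable
# names follow the trace (ctrl, ns_dev). nullglob is set here only so the
# standalone sketch degrades to an empty loop when nothing matches.
shopt -s extglob nullglob
declare -A nvme2_ns

ctrl=/sys/class/nvme/nvme2
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    ns_dev=${ns##*/}                 # ng2n1, ng2n2, ... or nvme2n1, ...
    nvme2_ns[${ns_dev##*n}]=$ns_dev  # key = namespace id; block nodes
done                                 #   overwrite earlier ng entries

for nsid in "${!nvme2_ns[@]}"; do
    echo "ns$nsid -> ${nvme2_ns[$nsid]}"
done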
00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:13:06.201 19:31:25 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.201 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:06.202 19:31:25 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:13:06.202 19:31:25 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:06.202 
19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.202 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:13:06.203 19:31:25 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:06.203 
19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:06.203 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
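The xtrace stream above is a single small helper replaying once per line of nvme id-ns output. A minimal sketch of that nvme_get loop, reconstructed from the functions.sh@16-23 entries in the trace (key normalization is simplified here, and the nvme binary is assumed to be on PATH, whereas the trace pins /usr/local/src/nvme-cli/nvme):

nvme_get() {                          # nvme_get <array-name> <subcmd> <device>
    local ref=$1 reg val
    shift                             # functions.sh@18: drop the array name
    local -gA "$ref=()"               # functions.sh@20: global associative array
    while IFS=: read -r reg val; do   # functions.sh@21: split "key : value"
        [[ -n $reg ]] || continue     # functions.sh@22: skip keyless lines
        reg=${reg//[[:space:]]/}      # assumed: normalize "lbaf  4 " to lbaf4
        eval "${ref}[\$reg]=\${val# }"   # functions.sh@23: store under the key
    done < <(nvme "$@")               # functions.sh@16: e.g. nvme id-ns /dev/nvme2n1
}

After nvme_get nvme2n1 id-ns /dev/nvme2n1 completes, fields read back as ${nvme2n1[nsze]}. The lbafN values captured above, e.g. 'ms:0 lbads:12 rp:0 (in use)', encode metadata bytes per block, log2 of the LBA data size (lbads:12 means 4096-byte blocks), and a relative performance hint.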
00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:13:06.204 19:31:25 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.204 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:13:06.205 19:31:25 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:06.205 19:31:25 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:13:06.206 19:31:25 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.206 19:31:25 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:13:06.206 19:31:25 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.206 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.207 19:31:25 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:13:06.207 19:31:25 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:13:06.207 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:13:06.467 19:31:25 nvme_fdp -- scripts/common.sh@18 -- # local i 00:13:06.467 19:31:25 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:06.467 19:31:25 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:06.467 19:31:25 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.467 19:31:25 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.467 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
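Stepping back one frame: the functions.sh@47-52 and @60-63 entries above show the outer controller scan that reached nvme3 and started this id-ctrl dump. A sketch under stated assumptions, reusing the nvme_get helper sketched earlier; how the BDF in pci is derived and what pci_can_use really checks (scripts/common.sh@18-27) are approximations, only the bookkeeping mirrors the trace:

pci_can_use() {                            # stub: the real scripts/common.sh@18-27
    [[ " ${PCI_BLOCKED-} " != *" $1 "* ]]  # check consults env allow/deny lists
}

shopt -s nullglob
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls
for ctrl in /sys/class/nvme/nvme*; do      # functions.sh@47
    [[ -e $ctrl ]] || continue             # functions.sh@48
    pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed source of @49's BDF
    pci_can_use "$pci" || continue         # functions.sh@50: skip reserved devices
    ctrl_dev=${ctrl##*/}                   # functions.sh@51: e.g. nvme3
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"   # functions.sh@52
    ctrls["$ctrl_dev"]=$ctrl_dev           # functions.sh@60
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns      # functions.sh@61: e.g. nvme3_ns
    bdfs["$ctrl_dev"]=$pci                 # functions.sh@62: e.g. 0000:00:13.0
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # functions.sh@63: index -> name
done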
00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.468 19:31:25 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 
19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.468 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.469 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
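The trace above (and continuing below with the power-state fields) is nvme/functions.sh caching every identify-controller field of nvme3 into a bash associative array: each reg/val pair is split with IFS=: and stored through eval, which is needed there because the array name itself (nvme3) is held in a variable. A minimal, self-contained sketch of that parsing pattern, with a fixed array name and a here-doc standing in for real identify output:

declare -A ctrl
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}                 # register name, e.g. "vid"
    [[ -n $reg && -n $val ]] && ctrl[$reg]=${val# }
done <<'EOF'
vid   : 0x1b36
ssvid : 0x1af4
mdts  : 7
EOF
echo "${ctrl[vid]}"                          # prints 0x1b36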
00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:13:06.470 19:31:25 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:06.470 19:31:25 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:13:06.471 19:31:25 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:13:06.471 19:31:25 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:13:06.471 19:31:25 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:13:06.471 19:31:25 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:06.728 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:07.294 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:07.294 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:07.294 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:07.294 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:07.294 19:31:26 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:13:07.294 19:31:26 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:07.294 19:31:26 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.294 19:31:26 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:13:07.294 ************************************ 00:13:07.294 START TEST nvme_flexible_data_placement 00:13:07.294 ************************************ 00:13:07.294 19:31:26 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:13:07.552 Initializing NVMe Controllers 00:13:07.552 Attaching to 0000:00:13.0 00:13:07.552 Controller supports FDP Attached to 0000:00:13.0 00:13:07.552 Namespace ID: 1 Endurance Group ID: 1 00:13:07.552 Initialization complete. 
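The controller walk just above turns on a single bit: CTRATT bit 19 is the Flexible Data Placement attribute, so (( ctratt & 1 << 19 )) succeeds only for nvme3 (0x88010 contains 0x80000) and fails for the three controllers reporting 0x8000. A standalone sketch of that test, simplified to take the raw CTRATT value instead of a controller name as the real ctrl_has_fdp does:

ctrl_has_fdp() {
    local ctratt=$1
    (( ctratt & 1 << 19 ))          # bit 19 = 0x80000, the FDP attribute
}
ctrl_has_fdp 0x88010 && echo "FDP supported"     # nvme3 in this run
ctrl_has_fdp 0x8000 || echo "no FDP"             # nvme0, nvme1, nvme2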
00:13:07.552 
00:13:07.553 ==================================
== FDP tests for Namespace: #01 ==
==================================
00:13:07.553 
00:13:07.553 Get Feature: FDP:
00:13:07.553 =================
00:13:07.553 Enabled: Yes
00:13:07.553 FDP configuration Index: 0
00:13:07.553 
00:13:07.553 FDP configurations log page
00:13:07.553 ===========================
00:13:07.553 Number of FDP configurations: 1
00:13:07.553 Version: 0
00:13:07.553 Size: 112
00:13:07.553 FDP Configuration Descriptor: 0
00:13:07.553 Descriptor Size: 96
00:13:07.553 Reclaim Group Identifier format: 2
00:13:07.553 FDP Volatile Write Cache: Not Present
00:13:07.553 FDP Configuration: Valid
00:13:07.553 Vendor Specific Size: 0
00:13:07.553 Number of Reclaim Groups: 2
00:13:07.553 Number of Reclaim Unit Handles: 8
00:13:07.553 Max Placement Identifiers: 128
00:13:07.553 Number of Namespaces Supported: 256
00:13:07.553 Reclaim unit Nominal Size: 6000000 bytes
00:13:07.553 Estimated Reclaim Unit Time Limit: Not Reported
00:13:07.553 RUH Desc #000: RUH Type: Initially Isolated
00:13:07.553 RUH Desc #001: RUH Type: Initially Isolated
00:13:07.553 RUH Desc #002: RUH Type: Initially Isolated
00:13:07.553 RUH Desc #003: RUH Type: Initially Isolated
00:13:07.553 RUH Desc #004: RUH Type: Initially Isolated
00:13:07.553 RUH Desc #005: RUH Type: Initially Isolated
00:13:07.553 RUH Desc #006: RUH Type: Initially Isolated
00:13:07.553 RUH Desc #007: RUH Type: Initially Isolated
00:13:07.553 
00:13:07.553 FDP reclaim unit handle usage log page
00:13:07.553 ======================================
00:13:07.553 Number of Reclaim Unit Handles: 8
00:13:07.553 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:13:07.553 RUH Usage Desc #001: RUH Attributes: Unused
00:13:07.553 RUH Usage Desc #002: RUH Attributes: Unused
00:13:07.553 RUH Usage Desc #003: RUH Attributes: Unused
00:13:07.553 RUH Usage Desc #004: RUH Attributes: Unused
00:13:07.553 RUH Usage Desc #005: RUH Attributes: Unused
00:13:07.553 RUH Usage Desc #006: RUH Attributes: Unused
00:13:07.553 RUH Usage Desc #007: RUH Attributes: Unused
00:13:07.553 
00:13:07.553 FDP statistics log page
00:13:07.553 =======================
00:13:07.553 Host bytes with metadata written: 1117700096
00:13:07.553 Media bytes with metadata written: 1117945856
00:13:07.553 Media bytes erased: 0
00:13:07.553 
00:13:07.553 FDP Reclaim unit handle status
00:13:07.553 ==============================
00:13:07.553 Number of RUHS descriptors: 2
00:13:07.553 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000005614
00:13:07.553 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:13:07.553 
00:13:07.553 FDP write on placement id: 0 success
00:13:07.553 
00:13:07.553 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:13:07.553 
00:13:07.553 IO mgmt send: RUH update for Placement ID: #0 Success
00:13:07.553 
00:13:07.553 Get Feature: FDP Events for Placement handle: #0
00:13:07.553 ========================
00:13:07.553 Number of FDP Events: 6
00:13:07.553 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:13:07.553 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:13:07.553 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes
00:13:07.553 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:13:07.553 FDP Event: #4 Type: Media Reallocated Enabled: No
00:13:07.553 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:13:07.553 
00:13:07.553 FDP events log page
00:13:07.553 ===================
00:13:07.553 Number of FDP events: 1
00:13:07.553 FDP Event #0:
00:13:07.553 Event Type: RU Not Written to Capacity
00:13:07.553 Placement Identifier: Valid
00:13:07.553 NSID: Valid
00:13:07.553 Location: Valid
00:13:07.553 Placement Identifier: 0
00:13:07.553 Event Timestamp: 5
00:13:07.553 Namespace Identifier: 1
00:13:07.553 Reclaim Group Identifier: 0
00:13:07.553 Reclaim Unit Handle Identifier: 0
00:13:07.553 
00:13:07.553 FDP test passed
00:13:07.553 
00:13:07.553 real 0m0.232s
00:13:07.553 user 0m0.078s
00:13:07.553 sys 0m0.052s
00:13:07.553 19:31:26 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:07.553 19:31:26 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:13:07.553 ************************************
00:13:07.553 END TEST nvme_flexible_data_placement
00:13:07.553 ************************************
00:13:07.553 ************************************
00:13:07.553 END TEST nvme_fdp
00:13:07.553 ************************************
00:13:07.553 
00:13:07.553 real 0m7.475s
00:13:07.553 user 0m1.097s
00:13:07.553 sys 0m1.330s
00:13:07.553 19:31:26 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:07.553 19:31:26 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:13:07.553 19:31:26 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:13:07.553 19:31:26 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:13:07.553 19:31:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:07.553 19:31:26 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:07.553 19:31:26 -- common/autotest_common.sh@10 -- # set +x
00:13:07.553 ************************************
00:13:07.553 START TEST nvme_rpc
00:13:07.553 ************************************
00:13:07.553 19:31:26 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:13:07.812 * Looking for test storage...
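A decoding note for the RUHS descriptors in the report above: RUAMW is Reclaim Unit Available Media Writes, and reading it as the number of logical blocks still writable in each reclaim unit is an assumption from the NVMe FDP specification, not something this log states. With that reading, the two reported values convert as:

printf '%d\n' 0x5614    # 22036 logical blocks left in RUHS Desc #0000
printf '%d\n' 0x6000    # 24576 logical blocks left in RUHS Desc #0001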
00:13:07.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.812 19:31:26 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:07.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.812 --rc genhtml_branch_coverage=1 00:13:07.812 --rc genhtml_function_coverage=1 00:13:07.812 --rc genhtml_legend=1 00:13:07.812 --rc geninfo_all_blocks=1 00:13:07.812 --rc geninfo_unexecuted_blocks=1 00:13:07.812 00:13:07.812 ' 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:07.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.812 --rc genhtml_branch_coverage=1 00:13:07.812 --rc genhtml_function_coverage=1 00:13:07.812 --rc genhtml_legend=1 00:13:07.812 --rc geninfo_all_blocks=1 00:13:07.812 --rc geninfo_unexecuted_blocks=1 00:13:07.812 00:13:07.812 ' 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:13:07.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.812 --rc genhtml_branch_coverage=1 00:13:07.812 --rc genhtml_function_coverage=1 00:13:07.812 --rc genhtml_legend=1 00:13:07.812 --rc geninfo_all_blocks=1 00:13:07.812 --rc geninfo_unexecuted_blocks=1 00:13:07.812 00:13:07.812 ' 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:07.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.812 --rc genhtml_branch_coverage=1 00:13:07.812 --rc genhtml_function_coverage=1 00:13:07.812 --rc genhtml_legend=1 00:13:07.812 --rc geninfo_all_blocks=1 00:13:07.812 --rc geninfo_unexecuted_blocks=1 00:13:07.812 00:13:07.812 ' 00:13:07.812 19:31:26 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:07.812 19:31:26 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:13:07.812 19:31:26 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:13:07.812 19:31:26 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65969 00:13:07.812 19:31:26 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:07.812 19:31:26 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:13:07.812 19:31:26 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65969 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 65969 ']' 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:07.812 19:31:26 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:07.812 [2024-12-05 19:31:26.763814] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
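The get_first_nvme_bdf steps above reduce to "list every NVMe transport address, take the first": gen_nvme.sh emits a JSON bdev config and jq extracts each traddr. The same sequence, replayed standalone with the repo path used in this run:

rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe devices found" >&2; exit 1; }
echo "${bdfs[0]}"       # 0000:00:10.0 on this node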
00:13:07.812 [2024-12-05 19:31:26.763934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65969 ] 00:13:08.071 [2024-12-05 19:31:26.924844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:08.071 [2024-12-05 19:31:27.022470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.071 [2024-12-05 19:31:27.022547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.638 19:31:27 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:08.638 19:31:27 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:08.638 19:31:27 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:13:08.897 Nvme0n1 00:13:08.897 19:31:27 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:13:08.897 19:31:27 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:13:09.155 request: 00:13:09.155 { 00:13:09.155 "bdev_name": "Nvme0n1", 00:13:09.155 "filename": "non_existing_file", 00:13:09.155 "method": "bdev_nvme_apply_firmware", 00:13:09.155 "req_id": 1 00:13:09.155 } 00:13:09.155 Got JSON-RPC error response 00:13:09.155 response: 00:13:09.155 { 00:13:09.155 "code": -32603, 00:13:09.155 "message": "open file failed." 00:13:09.155 } 00:13:09.155 19:31:28 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:13:09.155 19:31:28 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:13:09.155 19:31:28 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:13:09.413 19:31:28 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:09.413 19:31:28 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65969 00:13:09.413 19:31:28 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 65969 ']' 00:13:09.413 19:31:28 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 65969 00:13:09.413 19:31:28 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:09.413 19:31:28 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:09.413 19:31:28 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65969 00:13:09.413 killing process with pid 65969 00:13:09.413 19:31:28 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:09.413 19:31:28 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:09.413 19:31:28 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65969' 00:13:09.413 19:31:28 nvme_rpc -- common/autotest_common.sh@973 -- # kill 65969 00:13:09.413 19:31:28 nvme_rpc -- common/autotest_common.sh@978 -- # wait 65969 00:13:10.787 ************************************ 00:13:10.787 END TEST nvme_rpc 00:13:10.787 ************************************ 00:13:10.787 00:13:10.787 real 0m3.224s 00:13:10.787 user 0m6.172s 00:13:10.787 sys 0m0.466s 00:13:10.787 19:31:29 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:10.787 19:31:29 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:10.787 19:31:29 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:10.788 19:31:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:13:10.788 19:31:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:10.788 19:31:29 -- common/autotest_common.sh@10 -- # set +x 00:13:10.788 ************************************ 00:13:10.788 START TEST nvme_rpc_timeouts 00:13:10.788 ************************************ 00:13:10.788 19:31:29 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:13:11.046 * Looking for test storage... 00:13:11.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:11.046 19:31:29 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:11.046 19:31:29 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:13:11.046 19:31:29 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:11.046 19:31:29 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:11.046 19:31:29 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:13:11.046 19:31:29 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:11.046 19:31:29 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:11.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.046 --rc genhtml_branch_coverage=1 00:13:11.046 --rc genhtml_function_coverage=1 00:13:11.046 --rc genhtml_legend=1 00:13:11.046 --rc geninfo_all_blocks=1 00:13:11.046 --rc geninfo_unexecuted_blocks=1 00:13:11.046 00:13:11.046 ' 00:13:11.046 19:31:29 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:11.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.046 --rc genhtml_branch_coverage=1 00:13:11.046 --rc genhtml_function_coverage=1 00:13:11.046 --rc genhtml_legend=1 00:13:11.046 --rc geninfo_all_blocks=1 00:13:11.046 --rc geninfo_unexecuted_blocks=1 00:13:11.046 00:13:11.046 ' 00:13:11.046 19:31:29 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:11.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.046 --rc genhtml_branch_coverage=1 00:13:11.046 --rc genhtml_function_coverage=1 00:13:11.046 --rc genhtml_legend=1 00:13:11.046 --rc geninfo_all_blocks=1 00:13:11.046 --rc geninfo_unexecuted_blocks=1 00:13:11.047 00:13:11.047 ' 00:13:11.047 19:31:29 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:11.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.047 --rc genhtml_branch_coverage=1 00:13:11.047 --rc genhtml_function_coverage=1 00:13:11.047 --rc genhtml_legend=1 00:13:11.047 --rc geninfo_all_blocks=1 00:13:11.047 --rc geninfo_unexecuted_blocks=1 00:13:11.047 00:13:11.047 ' 00:13:11.047 19:31:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:11.047 19:31:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66034 00:13:11.047 19:31:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66034 00:13:11.047 19:31:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66066 00:13:11.047 19:31:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
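The START TEST banner above and the real/user/sys triplets throughout this log come from the run_test helper in autotest_common.sh, which brackets each test script with banners and times it. A reduced sketch of the pattern (the real helper also validates its arguments, manages xtrace, and records the result; the banner widths are simplified here):

run_test() {
    local test_name=$1; shift
    echo "************ START TEST $test_name ************"
    time "$@"                   # produces the real/user/sys lines seen in the log
    local rc=$?
    echo "************* END TEST $test_name *************"
    return $rc
}
run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh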
00:13:11.047 19:31:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66066 00:13:11.047 19:31:29 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 66066 ']' 00:13:11.047 19:31:29 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:13:11.047 19:31:29 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.047 19:31:29 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.047 19:31:29 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.047 19:31:29 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.047 19:31:29 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:11.047 [2024-12-05 19:31:29.982001] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:13:11.047 [2024-12-05 19:31:29.982292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66066 ] 00:13:11.305 [2024-12-05 19:31:30.140353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:11.305 [2024-12-05 19:31:30.237566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.305 [2024-12-05 19:31:30.237712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.870 19:31:30 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.870 19:31:30 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:13:11.870 19:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:13:11.870 Checking default timeout settings: 00:13:11.870 19:31:30 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:12.435 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:13:12.435 Making settings changes with rpc: 00:13:12.435 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:13:12.435 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:13:12.435 Check default vs. 
modified settings: 00:13:12.435 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66034 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66034 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:13:12.997 Setting action_on_timeout is changed as expected. 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66034 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66034 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:12.997 Setting timeout_us is changed as expected. 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66034 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66034 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:13:12.997 Setting timeout_admin_us is changed as expected. 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66034 /tmp/settings_modified_66034 00:13:12.997 19:31:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66066 00:13:12.997 19:31:31 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 66066 ']' 00:13:12.997 19:31:31 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 66066 00:13:12.997 19:31:31 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:13:12.997 19:31:31 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:12.997 19:31:31 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66066 00:13:12.997 killing process with pid 66066 00:13:12.997 19:31:31 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:12.997 19:31:31 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:12.997 19:31:31 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66066' 00:13:12.997 19:31:31 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 66066 00:13:12.997 19:31:31 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 66066 00:13:14.386 RPC TIMEOUT SETTING TEST PASSED. 00:13:14.386 19:31:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
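Condensed, the test traced above amounts to: dump the target's configuration, raise the NVMe timeouts over RPC, dump again, and assert that each of the three settings actually changed between dumps. A sketch using the same rpc.py calls and grep/awk/sed extraction seen in the trace (temp-file names are the ones from this run):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc_py save_config > /tmp/settings_default_66034
    $rpc_py bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc_py save_config > /tmp/settings_modified_66034

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        # Pull the value for this key out of each JSON dump, stripping punctuation.
        before=$(grep "$setting" /tmp/settings_default_66034 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_66034 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" == "$after" ] && { echo "ERROR: $setting did not change" >&2; exit 1; }
        echo "Setting $setting is changed as expected."
    done

Here none/0/0 become abort/12000000/24000000, which is exactly what the "changed as expected" lines above confirm.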
00:13:14.386 ************************************ 00:13:14.386 END TEST nvme_rpc_timeouts 00:13:14.386 ************************************ 00:13:14.386 00:13:14.386 real 0m3.204s 00:13:14.386 user 0m6.397s 00:13:14.386 sys 0m0.423s 00:13:14.386 19:31:32 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.386 19:31:32 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:13:14.386 19:31:32 -- spdk/autotest.sh@239 -- # uname -s 00:13:14.386 19:31:33 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:13:14.386 19:31:33 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:14.386 19:31:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:14.386 19:31:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.386 19:31:33 -- common/autotest_common.sh@10 -- # set +x 00:13:14.386 ************************************ 00:13:14.386 START TEST sw_hotplug 00:13:14.386 ************************************ 00:13:14.386 19:31:33 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:13:14.386 * Looking for test storage... 00:13:14.386 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:13:14.386 19:31:33 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:14.386 19:31:33 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:14.386 19:31:33 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:13:14.386 19:31:33 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:14.386 19:31:33 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:13:14.386 19:31:33 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:14.386 19:31:33 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:14.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.386 --rc genhtml_branch_coverage=1 00:13:14.386 --rc genhtml_function_coverage=1 00:13:14.386 --rc genhtml_legend=1 00:13:14.386 --rc geninfo_all_blocks=1 00:13:14.386 --rc geninfo_unexecuted_blocks=1 00:13:14.386 00:13:14.386 ' 00:13:14.386 19:31:33 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:14.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.386 --rc genhtml_branch_coverage=1 00:13:14.386 --rc genhtml_function_coverage=1 00:13:14.386 --rc genhtml_legend=1 00:13:14.386 --rc geninfo_all_blocks=1 00:13:14.386 --rc geninfo_unexecuted_blocks=1 00:13:14.386 00:13:14.386 ' 00:13:14.386 19:31:33 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:14.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.386 --rc genhtml_branch_coverage=1 00:13:14.386 --rc genhtml_function_coverage=1 00:13:14.386 --rc genhtml_legend=1 00:13:14.386 --rc geninfo_all_blocks=1 00:13:14.386 --rc geninfo_unexecuted_blocks=1 00:13:14.386 00:13:14.386 ' 00:13:14.386 19:31:33 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:14.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:14.386 --rc genhtml_branch_coverage=1 00:13:14.386 --rc genhtml_function_coverage=1 00:13:14.386 --rc genhtml_legend=1 00:13:14.386 --rc geninfo_all_blocks=1 00:13:14.386 --rc geninfo_unexecuted_blocks=1 00:13:14.386 00:13:14.386 ' 00:13:14.386 19:31:33 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:14.643 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:14.643 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:14.643 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:14.643 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:14.643 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:14.643 19:31:33 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:13:14.643 19:31:33 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:13:14.643 19:31:33 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
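The nvme_in_userspace trace that follows enumerates NVMe controllers by PCI class code and then filters them through pci_can_use and the per-device driver checks. The core lspci pipeline, lifted from the traced common.sh lines (stage order reconstructed from the trace; the surrounding pci_can_use/FreeBSD handling is elided, so treat this as a sketch):

    # NVMe is PCI class 01, subclass 08, progif 02; lspci -mm -n -D prints
    # machine-readable, numeric IDs with full domain:bus:dev.fn addresses.
    class=$(printf %02x 1) subclass=$(printf %02x 8) progif=$(printf %02x 2)
    lspci -mm -n -D | grep -i -- "-p${progif}" \
        | awk -v cc="\"${class}${subclass}\"" -F ' ' '{if (cc ~ $2) print $1}' \
        | tr -d '"'

On this VM that yields 0000:00:10.0 through 0000:00:13.0; the script then keeps only the first nvme_count=2 of them via nvmes=("${nvmes[@]::nvme_count}"), as traced below.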
00:13:14.643 19:31:33 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@233 -- # local class 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:14.643 19:31:33 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:14.644 19:31:33 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@18 -- # local i 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:13:14.644 19:31:33 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:13:14.644 19:31:33 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:13:14.644 19:31:33 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:13:14.644 19:31:33 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:14.901 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:15.159 Waiting for block devices as requested 00:13:15.159 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:15.159 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:15.159 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:15.416 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:20.677 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:20.677 19:31:39 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:13:20.677 19:31:39 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:20.677 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:13:20.677 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:20.677 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:13:20.937 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:13:21.199 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:13:21.199 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:13:21.199 19:31:40 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:13:21.199 19:31:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:21.459 19:31:40 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:13:21.460 19:31:40 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:13:21.460 19:31:40 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66918 00:13:21.460 19:31:40 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:13:21.460 19:31:40 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:21.460 19:31:40 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:13:21.460 19:31:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:13:21.460 19:31:40 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:13:21.460 19:31:40 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:13:21.460 19:31:40 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:13:21.460 19:31:40 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:13:21.460 19:31:40 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:13:21.460 19:31:40 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:21.460 19:31:40 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:21.460 19:31:40 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:13:21.460 19:31:40 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:21.460 19:31:40 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:21.460 Initializing NVMe Controllers 00:13:21.460 Attaching to 0000:00:10.0 00:13:21.460 Attaching to 0000:00:11.0 00:13:21.460 Attached to 0000:00:11.0 00:13:21.460 Attached to 0000:00:10.0 00:13:21.460 Initialization complete. Starting I/O... 
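What follows are three surprise-removal cycles driven by remove_attach_helper against the two allowed controllers while the hotplug example app keeps I/O running. Pieced together from the echo/sleep traces below, each cycle looks roughly like this (xtrace does not show redirection targets, so the sysfs paths here are an inference):

    remove_attach_helper() {
        local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3 dev
        while (( hotplug_events-- )); do
            for dev in "${nvmes[@]}"; do
                # Surprise-remove the controller out from under the app.
                echo 1 > "/sys/bus/pci/devices/$dev/remove"
            done
            # use_bdev=false on this pass, so no RPC-side wait before re-attach.
            echo 1 > /sys/bus/pci/rescan
            for dev in "${nvmes[@]}"; do
                echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
                echo "$dev" > /sys/bus/pci/drivers_probe
                echo '' > "/sys/bus/pci/devices/$dev/driver_override"
            done
            sleep "$hotplug_wait"  # let the app re-attach before the next cycle
        done
    }

The second, target-based pass later in the log (debug_remove_attach_helper 3 6 true) runs the same loop with use_bdev=true, instead polling rpc_cmd bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u (the traced bdev_bdfs helper) until the removed BDFs drop out of the target's bdev list.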
00:13:21.460 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:13:21.460 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:13:21.460 00:13:22.835 QEMU NVMe Ctrl (12341 ): 2506 I/Os completed (+2506) 00:13:22.835 QEMU NVMe Ctrl (12340 ): 2511 I/Os completed (+2511) 00:13:22.835 00:13:23.769 QEMU NVMe Ctrl (12341 ): 5598 I/Os completed (+3092) 00:13:23.769 QEMU NVMe Ctrl (12340 ): 5598 I/Os completed (+3087) 00:13:23.769 00:13:24.704 QEMU NVMe Ctrl (12341 ): 8889 I/Os completed (+3291) 00:13:24.704 QEMU NVMe Ctrl (12340 ): 8886 I/Os completed (+3288) 00:13:24.704 00:13:25.638 QEMU NVMe Ctrl (12341 ): 12547 I/Os completed (+3658) 00:13:25.638 QEMU NVMe Ctrl (12340 ): 12534 I/Os completed (+3648) 00:13:25.638 00:13:26.572 QEMU NVMe Ctrl (12341 ): 16183 I/Os completed (+3636) 00:13:26.572 QEMU NVMe Ctrl (12340 ): 16169 I/Os completed (+3635) 00:13:26.572 00:13:27.505 19:31:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:27.505 19:31:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:27.505 19:31:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:27.505 [2024-12-05 19:31:46.261186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:27.505 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:27.505 [2024-12-05 19:31:46.262359] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:27.505 [2024-12-05 19:31:46.262470] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:27.505 [2024-12-05 19:31:46.262500] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:27.505 [2024-12-05 19:31:46.262555] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:27.505 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:27.505 [2024-12-05 19:31:46.263999] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:27.505 [2024-12-05 19:31:46.264093] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:27.505 [2024-12-05 19:31:46.264183] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:27.505 [2024-12-05 19:31:46.264210] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:27.505 19:31:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:27.505 19:31:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:27.505 [2024-12-05 19:31:46.283371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:27.505 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:27.505 [2024-12-05 19:31:46.284397] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:27.505 [2024-12-05 19:31:46.284503] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:27.505 [2024-12-05 19:31:46.284536] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:27.505 [2024-12-05 19:31:46.284595] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:27.505 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:27.505 [2024-12-05 19:31:46.286104] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:27.505 [2024-12-05 19:31:46.286204] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:27.505 [2024-12-05 19:31:46.286232] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:27.505 [2024-12-05 19:31:46.286282] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:27.505 19:31:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:27.505 19:31:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:27.505 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:13:27.505 EAL: Scan for (pci) bus failed. 00:13:27.505 19:31:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:27.505 19:31:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:27.505 19:31:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:27.505 19:31:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:27.505 19:31:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:27.505 19:31:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:27.505 19:31:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:27.505 19:31:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:27.505 Attaching to 0000:00:10.0 00:13:27.505 Attached to 0000:00:10.0 00:13:27.505 QEMU NVMe Ctrl (12340 ): 84 I/Os completed (+84) 00:13:27.505 00:13:27.505 19:31:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:27.505 19:31:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:27.505 19:31:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:27.505 Attaching to 0000:00:11.0 00:13:27.505 Attached to 0000:00:11.0 00:13:28.877 QEMU NVMe Ctrl (12340 ): 3786 I/Os completed (+3702) 00:13:28.877 QEMU NVMe Ctrl (12341 ): 3478 I/Os completed (+3478) 00:13:28.877 00:13:29.810 QEMU NVMe Ctrl (12340 ): 7220 I/Os completed (+3434) 00:13:29.810 QEMU NVMe Ctrl (12341 ): 6931 I/Os completed (+3453) 00:13:29.810 00:13:30.740 QEMU NVMe Ctrl (12340 ): 10283 I/Os completed (+3063) 00:13:30.740 QEMU NVMe Ctrl (12341 ): 9984 I/Os completed (+3053) 00:13:30.740 00:13:31.673 QEMU NVMe Ctrl (12340 ): 13474 I/Os completed (+3191) 00:13:31.673 QEMU NVMe Ctrl (12341 ): 13190 I/Os completed (+3206) 00:13:31.673 00:13:32.606 QEMU NVMe Ctrl (12340 ): 17097 I/Os completed (+3623) 00:13:32.606 QEMU NVMe Ctrl (12341 ): 16883 I/Os completed (+3693) 00:13:32.606 00:13:33.632 QEMU NVMe Ctrl (12340 ): 20787 I/Os completed (+3690) 00:13:33.632 QEMU NVMe Ctrl (12341 ): 20551 I/Os completed (+3668) 00:13:33.632 00:13:34.564 QEMU NVMe Ctrl (12340 ): 24459 I/Os completed (+3672) 
00:13:34.564 QEMU NVMe Ctrl (12341 ): 24218 I/Os completed (+3667) 00:13:34.564 00:13:35.495 QEMU NVMe Ctrl (12340 ): 27708 I/Os completed (+3249) 00:13:35.495 QEMU NVMe Ctrl (12341 ): 27463 I/Os completed (+3245) 00:13:35.495 00:13:36.867 QEMU NVMe Ctrl (12340 ): 31157 I/Os completed (+3449) 00:13:36.867 QEMU NVMe Ctrl (12341 ): 31093 I/Os completed (+3630) 00:13:36.867 00:13:37.800 QEMU NVMe Ctrl (12340 ): 34568 I/Os completed (+3411) 00:13:37.800 QEMU NVMe Ctrl (12341 ): 34543 I/Os completed (+3450) 00:13:37.800 00:13:38.733 QEMU NVMe Ctrl (12340 ): 37774 I/Os completed (+3206) 00:13:38.733 QEMU NVMe Ctrl (12341 ): 37654 I/Os completed (+3111) 00:13:38.733 00:13:39.667 QEMU NVMe Ctrl (12340 ): 40989 I/Os completed (+3215) 00:13:39.667 QEMU NVMe Ctrl (12341 ): 40808 I/Os completed (+3154) 00:13:39.667 00:13:39.667 19:31:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:39.667 19:31:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:39.667 19:31:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:39.667 19:31:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:39.667 [2024-12-05 19:31:58.509611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:39.667 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:39.667 [2024-12-05 19:31:58.511408] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.667 [2024-12-05 19:31:58.511448] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.667 [2024-12-05 19:31:58.511462] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.667 [2024-12-05 19:31:58.511476] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.667 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:39.667 [2024-12-05 19:31:58.512996] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.667 [2024-12-05 19:31:58.513120] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.667 [2024-12-05 19:31:58.513151] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.667 [2024-12-05 19:31:58.513163] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.667 19:31:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:39.667 19:31:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:39.667 [2024-12-05 19:31:58.530673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:39.667 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:39.667 [2024-12-05 19:31:58.531558] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.667 [2024-12-05 19:31:58.531612] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.667 [2024-12-05 19:31:58.531641] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.667 [2024-12-05 19:31:58.531666] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.667 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:39.667 [2024-12-05 19:31:58.533099] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.667 [2024-12-05 19:31:58.533198] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.667 [2024-12-05 19:31:58.533215] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.667 [2024-12-05 19:31:58.533227] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:39.667 19:31:58 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:39.667 19:31:58 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:39.667 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:13:39.667 EAL: Scan for (pci) bus failed. 00:13:39.667 19:31:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:39.667 19:31:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:39.667 19:31:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:39.926 19:31:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:39.926 19:31:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:39.926 19:31:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:39.926 19:31:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:39.926 19:31:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:39.926 Attaching to 0000:00:10.0 00:13:39.926 Attached to 0000:00:10.0 00:13:39.926 19:31:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:39.926 19:31:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:39.926 19:31:58 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:39.926 Attaching to 0000:00:11.0 00:13:39.926 Attached to 0000:00:11.0 00:13:40.491 QEMU NVMe Ctrl (12340 ): 2782 I/Os completed (+2782) 00:13:40.491 QEMU NVMe Ctrl (12341 ): 2516 I/Os completed (+2516) 00:13:40.491 00:13:41.864 QEMU NVMe Ctrl (12340 ): 6124 I/Os completed (+3342) 00:13:41.864 QEMU NVMe Ctrl (12341 ): 5870 I/Os completed (+3354) 00:13:41.864 00:13:42.799 QEMU NVMe Ctrl (12340 ): 9757 I/Os completed (+3633) 00:13:42.799 QEMU NVMe Ctrl (12341 ): 9478 I/Os completed (+3608) 00:13:42.799 00:13:43.732 QEMU NVMe Ctrl (12340 ): 13366 I/Os completed (+3609) 00:13:43.732 QEMU NVMe Ctrl (12341 ): 13111 I/Os completed (+3633) 00:13:43.732 00:13:44.665 QEMU NVMe Ctrl (12340 ): 16899 I/Os completed (+3533) 00:13:44.665 QEMU NVMe Ctrl (12341 ): 16653 I/Os completed (+3542) 00:13:44.665 00:13:45.598 QEMU NVMe Ctrl (12340 ): 20705 I/Os completed (+3806) 00:13:45.598 QEMU NVMe Ctrl (12341 ): 20075 I/Os completed (+3422) 00:13:45.598 00:13:46.533 QEMU NVMe Ctrl (12340 ): 24204 I/Os completed (+3499) 00:13:46.533 QEMU NVMe Ctrl (12341 ): 23595 I/Os completed (+3520) 00:13:46.533 
00:13:47.467 QEMU NVMe Ctrl (12340 ): 27802 I/Os completed (+3598) 00:13:47.467 QEMU NVMe Ctrl (12341 ): 27209 I/Os completed (+3614) 00:13:47.467 00:13:48.840 QEMU NVMe Ctrl (12340 ): 31319 I/Os completed (+3517) 00:13:48.840 QEMU NVMe Ctrl (12341 ): 30678 I/Os completed (+3469) 00:13:48.840 00:13:49.772 QEMU NVMe Ctrl (12340 ): 34775 I/Os completed (+3456) 00:13:49.772 QEMU NVMe Ctrl (12341 ): 34036 I/Os completed (+3358) 00:13:49.772 00:13:50.705 QEMU NVMe Ctrl (12340 ): 37945 I/Os completed (+3170) 00:13:50.706 QEMU NVMe Ctrl (12341 ): 37171 I/Os completed (+3135) 00:13:50.706 00:13:51.640 QEMU NVMe Ctrl (12340 ): 41537 I/Os completed (+3592) 00:13:51.640 QEMU NVMe Ctrl (12341 ): 40747 I/Os completed (+3576) 00:13:51.640 00:13:51.899 19:32:10 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:13:51.899 19:32:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:51.899 19:32:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:51.899 19:32:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:51.899 [2024-12-05 19:32:10.758408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:51.899 Controller removed: QEMU NVMe Ctrl (12340 ) 00:13:51.899 [2024-12-05 19:32:10.760102] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:51.899 [2024-12-05 19:32:10.760231] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:51.899 [2024-12-05 19:32:10.760262] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:51.899 [2024-12-05 19:32:10.760318] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:51.899 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:51.899 [2024-12-05 19:32:10.762020] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:51.899 [2024-12-05 19:32:10.762122] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:51.899 [2024-12-05 19:32:10.762175] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:51.899 [2024-12-05 19:32:10.762233] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:51.899 19:32:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:51.899 19:32:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:51.899 [2024-12-05 19:32:10.783404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:51.899 Controller removed: QEMU NVMe Ctrl (12341 ) 00:13:51.899 [2024-12-05 19:32:10.784363] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:51.899 [2024-12-05 19:32:10.784456] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:51.899 [2024-12-05 19:32:10.784485] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:51.899 [2024-12-05 19:32:10.784538] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:51.899 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:51.899 [2024-12-05 19:32:10.785967] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:51.899 [2024-12-05 19:32:10.786058] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:51.899 [2024-12-05 19:32:10.786089] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:51.899 [2024-12-05 19:32:10.786153] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:51.899 19:32:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:13:51.899 19:32:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:51.899 19:32:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:51.899 19:32:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:51.899 19:32:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:52.157 19:32:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:52.157 19:32:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:52.157 19:32:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:52.157 19:32:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:52.158 19:32:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:52.158 Attaching to 0000:00:10.0 00:13:52.158 Attached to 0000:00:10.0 00:13:52.158 19:32:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:52.158 19:32:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:52.158 19:32:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:52.158 Attaching to 0000:00:11.0 00:13:52.158 Attached to 0000:00:11.0 00:13:52.158 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:13:52.158 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:13:52.158 [2024-12-05 19:32:11.013787] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:14:04.470 19:32:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:14:04.470 19:32:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:04.470 19:32:23 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.75 00:14:04.470 19:32:23 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.75 00:14:04.470 19:32:23 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:14:04.470 19:32:23 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.75 00:14:04.470 19:32:23 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.75 2 00:14:04.470 remove_attach_helper took 42.75s to complete (handling 2 nvme drive(s)) 19:32:23 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:14:11.022 19:32:29 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66918 00:14:11.022 
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66918) - No such process 00:14:11.022 19:32:29 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66918 00:14:11.022 19:32:29 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:14:11.022 19:32:29 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:14:11.022 19:32:29 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:14:11.022 19:32:29 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67468 00:14:11.022 19:32:29 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:11.022 19:32:29 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:14:11.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.022 19:32:29 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67468 00:14:11.022 19:32:29 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 67468 ']' 00:14:11.022 19:32:29 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.022 19:32:29 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:11.022 19:32:29 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.022 19:32:29 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:11.022 19:32:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:11.022 [2024-12-05 19:32:29.091002] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:14:11.022 [2024-12-05 19:32:29.091689] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67468 ] 00:14:11.022 [2024-12-05 19:32:29.259286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.022 [2024-12-05 19:32:29.356122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.022 19:32:29 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:11.022 19:32:29 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:14:11.022 19:32:29 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:14:11.022 19:32:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.022 19:32:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:11.022 19:32:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.022 19:32:29 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:14:11.022 19:32:29 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:11.022 19:32:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:14:11.022 19:32:29 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:14:11.022 19:32:29 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:14:11.022 19:32:29 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:14:11.022 19:32:29 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:14:11.022 19:32:29 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:14:11.022 19:32:29 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:11.022 19:32:29 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:11.022 19:32:29 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:14:11.022 19:32:29 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:11.022 19:32:29 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:14:17.593 19:32:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:17.593 19:32:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:17.593 19:32:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:17.593 19:32:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:17.593 19:32:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:17.593 19:32:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:17.593 19:32:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:17.593 19:32:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:17.593 19:32:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:17.593 19:32:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:17.593 19:32:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:17.593 19:32:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.593 19:32:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:17.593 19:32:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.593 19:32:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:17.593 19:32:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:17.593 [2024-12-05 19:32:36.045931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:17.593 [2024-12-05 19:32:36.047243] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.593 [2024-12-05 19:32:36.047279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.593 [2024-12-05 19:32:36.047293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.594 [2024-12-05 19:32:36.047311] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.594 [2024-12-05 19:32:36.047318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.594 [2024-12-05 19:32:36.047327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.594 [2024-12-05 19:32:36.047334] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.594 [2024-12-05 19:32:36.047342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.594 [2024-12-05 19:32:36.047349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.594 [2024-12-05 19:32:36.047360] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.594 [2024-12-05 19:32:36.047366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.594 [2024-12-05 19:32:36.047374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.594 19:32:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:17.594 19:32:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:17.594 19:32:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:17.594 19:32:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:17.594 19:32:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:17.594 19:32:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:17.594 19:32:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.594 19:32:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:17.594 [2024-12-05 19:32:36.545924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:14:17.594 [2024-12-05 19:32:36.547187] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.594 [2024-12-05 19:32:36.547217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.594 [2024-12-05 19:32:36.547230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.594 [2024-12-05 19:32:36.547246] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.594 [2024-12-05 19:32:36.547254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.594 [2024-12-05 19:32:36.547261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.594 [2024-12-05 19:32:36.547270] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.594 [2024-12-05 19:32:36.547278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.594 [2024-12-05 19:32:36.547286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.594 [2024-12-05 19:32:36.547293] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:17.594 [2024-12-05 19:32:36.547301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:17.594 [2024-12-05 19:32:36.547308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:17.594 19:32:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.594 19:32:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:17.594 19:32:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:18.164 19:32:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:18.164 19:32:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:18.164 19:32:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:18.164 19:32:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:18.164 19:32:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq 
-r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:18.164 19:32:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:18.164 19:32:37 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.164 19:32:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:18.164 19:32:37 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.164 19:32:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:18.164 19:32:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:18.425 19:32:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:18.425 19:32:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:18.425 19:32:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:18.425 19:32:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:18.425 19:32:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:18.425 19:32:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:18.425 19:32:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:18.425 19:32:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:18.425 19:32:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:18.425 19:32:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:18.425 19:32:37 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:30.659 19:32:49 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:30.659 19:32:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:30.659 19:32:49 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:30.659 19:32:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:30.659 19:32:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:30.659 19:32:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:30.659 19:32:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.659 19:32:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:30.659 19:32:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.659 19:32:49 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:30.659 19:32:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:30.659 19:32:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:30.659 19:32:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:30.659 19:32:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:30.659 19:32:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:30.659 19:32:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:30.659 19:32:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:30.659 19:32:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:30.659 19:32:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:30.659 19:32:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:30.659 19:32:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:30.659 19:32:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.659 19:32:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:30.659 [2024-12-05 19:32:49.446124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:14:30.659 [2024-12-05 19:32:49.447407] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.660 [2024-12-05 19:32:49.447444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.660 [2024-12-05 19:32:49.447455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.660 [2024-12-05 19:32:49.447471] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.660 [2024-12-05 19:32:49.447478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.660 [2024-12-05 19:32:49.447486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.660 [2024-12-05 19:32:49.447494] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.660 [2024-12-05 19:32:49.447502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.660 [2024-12-05 19:32:49.447508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.660 [2024-12-05 19:32:49.447516] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:30.660 [2024-12-05 19:32:49.447523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.660 [2024-12-05 19:32:49.447530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.660 19:32:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.660 19:32:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:30.660 19:32:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:31.232 [2024-12-05 19:32:49.946149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:31.232 [2024-12-05 19:32:49.947435] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.232 [2024-12-05 19:32:49.947585] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.232 [2024-12-05 19:32:49.947604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.232 [2024-12-05 19:32:49.947620] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.232 [2024-12-05 19:32:49.947629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.232 [2024-12-05 19:32:49.947637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.232 [2024-12-05 19:32:49.947647] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.232 [2024-12-05 19:32:49.947653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.232 [2024-12-05 19:32:49.947661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.232 [2024-12-05 19:32:49.947668] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:31.232 [2024-12-05 19:32:49.947676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.232 [2024-12-05 19:32:49.947682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.232 19:32:49 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:31.232 19:32:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:31.232 19:32:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:31.232 19:32:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:31.232 19:32:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:31.232 19:32:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:31.232 19:32:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.232 19:32:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:31.232 19:32:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.232 19:32:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:31.232 19:32:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:31.232 19:32:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:31.232 19:32:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:31.232 19:32:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:31.232 19:32:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:31.232 19:32:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:31.232 19:32:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:31.232 19:32:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:31.232 19:32:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:31.494 19:32:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:31.494 19:32:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:31.494 19:32:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:43.770 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:43.770 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:43.770 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:43.770 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:43.770 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:43.770 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:43.770 19:33:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.770 19:33:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:43.770 19:33:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.770 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:43.770 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:43.770 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:43.770 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:43.770 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:43.770 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:43.770 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:43.770 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:43.770 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:43.770 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:43.770 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:43.770 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:43.770 19:33:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.770 19:33:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:43.770 [2024-12-05 19:33:02.346335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:14:43.770 [2024-12-05 19:33:02.347668] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.770 [2024-12-05 19:33:02.347703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.770 [2024-12-05 19:33:02.347714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.770 [2024-12-05 19:33:02.347732] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.770 [2024-12-05 19:33:02.347739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.770 [2024-12-05 19:33:02.347749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.770 [2024-12-05 19:33:02.347756] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.770 [2024-12-05 19:33:02.347764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.770 [2024-12-05 19:33:02.347771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.770 [2024-12-05 19:33:02.347781] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.770 [2024-12-05 19:33:02.347787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.770 [2024-12-05 19:33:02.347795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.770 19:33:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.770 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:43.770 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:43.770 [2024-12-05 19:33:02.746335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
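[Annotation] Every hot-remove leaves the same signature per controller: one nvme_ctrlr_fail "in failed state" notice, then four aborting/ABORTED pairs as nvme_pcie_qpair_abort_trackers drains the admin queue's outstanding ASYNC EVENT REQUESTs (cid 187-190), each completed with status 00/07, ABORTED - BY REQUEST. The block below is that drain for 0000:00:11.0. When triaging a log like this, the bursts can be tallied per controller; a hypothetical snippet (the log file name is assumed, not part of the test):

    grep -c 'ABORTED - BY REQUEST' autotest.log                              # total aborted admin commands
    grep -o '\[0000:[0-9a-f:.]*, 0\] in failed state' autotest.log | sort | uniq -c   # drains per controller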
00:14:43.770 [2024-12-05 19:33:02.747706] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.770 [2024-12-05 19:33:02.747736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.770 [2024-12-05 19:33:02.747748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.770 [2024-12-05 19:33:02.747763] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.770 [2024-12-05 19:33:02.747771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.770 [2024-12-05 19:33:02.747778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.770 [2024-12-05 19:33:02.747787] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.770 [2024-12-05 19:33:02.747794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.770 [2024-12-05 19:33:02.747803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:43.770 [2024-12-05 19:33:02.747810] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:43.770 [2024-12-05 19:33:02.747818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:43.770 [2024-12-05 19:33:02.747825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:44.031 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:44.031 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:44.031 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:44.031 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:44.031 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:44.031 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:44.031 19:33:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.031 19:33:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:44.031 19:33:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.031 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:44.031 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:44.031 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:44.031 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:44.031 19:33:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:44.293 19:33:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:44.293 19:33:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:44.293 19:33:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:44.293 19:33:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:44.293 19:33:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:44.293 19:33:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:44.293 19:33:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:44.293 19:33:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:56.559 19:33:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:56.559 19:33:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:56.559 19:33:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:56.559 19:33:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:56.559 19:33:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:56.559 19:33:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:56.559 19:33:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.559 19:33:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:56.559 19:33:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.559 19:33:15 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:56.559 19:33:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:56.559 19:33:15 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.21 00:14:56.559 19:33:15 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.21 00:14:56.559 19:33:15 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:14:56.559 19:33:15 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.21 00:14:56.559 19:33:15 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.21 2 00:14:56.559 remove_attach_helper took 45.21s to complete (handling 2 nvme drive(s)) 19:33:15 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:14:56.559 19:33:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.559 19:33:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:56.559 19:33:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.559 19:33:15 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:14:56.559 19:33:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.559 19:33:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:56.559 19:33:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.559 19:33:15 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:14:56.559 19:33:15 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:14:56.559 19:33:15 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:14:56.559 19:33:15 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:14:56.559 19:33:15 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:14:56.559 19:33:15 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:14:56.559 19:33:15 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:14:56.559 19:33:15 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:14:56.559 19:33:15 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:14:56.559 19:33:15 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:14:56.559 19:33:15 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:14:56.559 19:33:15 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:14:56.559 19:33:15 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:15:03.142 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:03.142 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:03.142 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:03.142 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:03.142 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:03.142 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:03.142 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:03.142 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:03.142 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:03.142 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:03.142 19:33:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.142 19:33:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:03.142 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:03.142 19:33:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.142 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:15:03.142 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:03.142 [2024-12-05 19:33:21.290151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:15:03.142 [2024-12-05 19:33:21.291187] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.142 [2024-12-05 19:33:21.291301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.142 [2024-12-05 19:33:21.291316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.142 [2024-12-05 19:33:21.291335] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.142 [2024-12-05 19:33:21.291343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.142 [2024-12-05 19:33:21.291351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.142 [2024-12-05 19:33:21.291358] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.142 [2024-12-05 19:33:21.291366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.142 [2024-12-05 19:33:21.291373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.143 [2024-12-05 19:33:21.291382] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.143 [2024-12-05 19:33:21.291388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.143 [2024-12-05 19:33:21.291400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.143 [2024-12-05 19:33:21.690152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
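[Annotation] The "took 45.21s" summary earlier comes from timing_cmd in autotest_common.sh, which runs the helper under bash's time builtin with TIMEFORMAT=%2R and hands the elapsed seconds back through helper_time. A simplified sketch of that wrapper (the real one preserves the command's own output via the exec redirection visible in the @711 trace; this version discards it):

    timing_cmd() {
        local TIMEFORMAT=%2R elapsed
        # capture only the timing report printed by the time keyword
        elapsed=$( { time "$@" >/dev/null 2>&1; } 2>&1 )
        echo "$elapsed"
    }

    helper_time=$(timing_cmd remove_attach_helper 3 6 true)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2

The cycle traced around this point belongs to the second run, debug_remove_attach_helper 3 6 true, which repeats the three hotplug events with use_bdev=true after toggling bdev_nvme_set_hotplug off and on. (The failed-state notice above and the abort pairs below are the drain for 0000:00:11.0, as described earlier.)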
00:15:03.143 [2024-12-05 19:33:21.691307] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.143 [2024-12-05 19:33:21.691337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.143 [2024-12-05 19:33:21.691349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.143 [2024-12-05 19:33:21.691363] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.143 [2024-12-05 19:33:21.691374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.143 [2024-12-05 19:33:21.691381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.143 [2024-12-05 19:33:21.691390] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.143 [2024-12-05 19:33:21.691396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.143 [2024-12-05 19:33:21.691404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.143 [2024-12-05 19:33:21.691411] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:03.143 [2024-12-05 19:33:21.691419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:03.143 [2024-12-05 19:33:21.691425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:03.143 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:15:03.143 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:03.143 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:03.143 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:03.143 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:03.143 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:03.143 19:33:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.143 19:33:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:03.143 19:33:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.143 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:03.143 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:03.143 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:03.143 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:03.143 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:03.143 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:03.143 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:03.143 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:03.143 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:03.143 19:33:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:15:03.143 19:33:22 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:03.143 19:33:22 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:03.143 19:33:22 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:15.375 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:15.375 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:15.375 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:15.375 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:15.375 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:15.375 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:15.375 19:33:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.375 19:33:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:15.375 19:33:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.375 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:15.375 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:15.375 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:15.375 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:15.375 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:15.375 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:15.375 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:15.375 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:15.375 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:15.375 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:15.375 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:15.375 19:33:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.375 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:15.375 19:33:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:15.375 [2024-12-05 19:33:34.090367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
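[Annotation] Between events the test re-attaches both controllers: @56 echoes 1 (a bus rescan), then for each BDF the @58-@62 loop echoes the driver name, the address twice, and an empty string, and @66 sleeps 12 s before verifying. The trace records only the echoed values, not their destinations; a plausible sysfs expansion, offered purely as an assumption:

    # Assumed sysfs targets -- the trace shows only the echoed values.
    echo 1 > /sys/bus/pci/rescan                                            # @56: rediscover removed devices
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"  # @59
        echo "$dev" > /sys/bus/pci/drivers/uio_pci_generic/bind             # @60/@61
        echo ''    > "/sys/bus/pci/devices/$dev/driver_override"            # @62: clear the override
    done
    sleep 12                                                                # @66: let the bdev layer re-enumerate

The failed-state notice just above starts the next removal; its abort drain continues below.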
00:15:15.375 [2024-12-05 19:33:34.091615] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.375 [2024-12-05 19:33:34.091644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.375 [2024-12-05 19:33:34.091655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.375 [2024-12-05 19:33:34.091672] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.375 [2024-12-05 19:33:34.091680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.375 [2024-12-05 19:33:34.091689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.375 [2024-12-05 19:33:34.091697] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.375 [2024-12-05 19:33:34.091705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.375 [2024-12-05 19:33:34.091712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.375 [2024-12-05 19:33:34.091721] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.375 [2024-12-05 19:33:34.091727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.375 [2024-12-05 19:33:34.091735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.375 19:33:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.375 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:15.375 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:15.636 [2024-12-05 19:33:34.490387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:15:15.636 [2024-12-05 19:33:34.491405] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.636 [2024-12-05 19:33:34.491437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.636 [2024-12-05 19:33:34.491450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.636 [2024-12-05 19:33:34.491466] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.636 [2024-12-05 19:33:34.491479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.636 [2024-12-05 19:33:34.491486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.636 [2024-12-05 19:33:34.491496] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.636 [2024-12-05 19:33:34.491503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.636 [2024-12-05 19:33:34.491511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.636 [2024-12-05 19:33:34.491518] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:15.636 [2024-12-05 19:33:34.491525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:15.636 [2024-12-05 19:33:34.491532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:15.636 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:15.636 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:15.636 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:15.636 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:15.636 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:15.636 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:15.636 19:33:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.636 19:33:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:15.636 19:33:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.898 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:15.898 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:15.898 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:15.898 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:15.898 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:15.898 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:15.898 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:15.898 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:15.898 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:15.898 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:15:15.898 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:16.158 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:16.158 19:33:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:28.460 19:33:46 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:28.460 19:33:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:28.460 19:33:46 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:28.460 19:33:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:28.460 19:33:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:28.460 19:33:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:28.460 19:33:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.460 19:33:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:28.460 19:33:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.460 19:33:46 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:28.460 19:33:46 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:28.460 19:33:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:28.460 19:33:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:28.460 19:33:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:15:28.460 19:33:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:15:28.460 19:33:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:15:28.460 19:33:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:28.460 19:33:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:28.460 [2024-12-05 19:33:46.990588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
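[Annotation] After the 12 s settle, the @68-@71 trace above re-reads the bdev list and string-compares it against the expected pair of addresses; bash's xtrace prints the right-hand side of [[ == ]] with every character escaped, hence the backslash run. Roughly (variable names are assumptions):

    bdfs=($(bdev_bdfs))
    [[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]   # both controllers re-enumerated

The failed-state notice above kicks off the next removal; its drain and the 0.5 s polling loop interleave in the trace that follows.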
00:15:28.460 19:33:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:28.460 19:33:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:28.460 [2024-12-05 19:33:46.992034] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.461 19:33:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:28.461 [2024-12-05 19:33:46.992144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.461 [2024-12-05 19:33:46.992216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.461 [2024-12-05 19:33:46.992296] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.461 [2024-12-05 19:33:46.992351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.461 [2024-12-05 19:33:46.992384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.461 [2024-12-05 19:33:46.992412] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.461 [2024-12-05 19:33:46.992469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.461 [2024-12-05 19:33:46.992496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.461 [2024-12-05 19:33:46.992522] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.461 [2024-12-05 19:33:46.992539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.461 [2024-12-05 19:33:46.992601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.461 19:33:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.461 19:33:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:28.461 19:33:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.461 19:33:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:28.461 19:33:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:15:28.722 19:33:47 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:28.722 19:33:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:28.722 19:33:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:28.722 19:33:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:28.722 19:33:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:28.722 19:33:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.722 19:33:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:28.722 19:33:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:28.722 19:33:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.722 19:33:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:15:28.722 19:33:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # 
sleep 0.5 00:15:28.722 [2024-12-05 19:33:47.690606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:15:28.722 [2024-12-05 19:33:47.691768] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.722 [2024-12-05 19:33:47.691867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.722 [2024-12-05 19:33:47.691932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.722 [2024-12-05 19:33:47.691966] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.722 [2024-12-05 19:33:47.691984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.722 [2024-12-05 19:33:47.692104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.722 [2024-12-05 19:33:47.692148] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.722 [2024-12-05 19:33:47.692167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.722 [2024-12-05 19:33:47.692222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.722 [2024-12-05 19:33:47.692250] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:15:28.722 [2024-12-05 19:33:47.692270] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.722 [2024-12-05 19:33:47.692294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:29.295 19:33:48 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:15:29.295 19:33:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:15:29.295 19:33:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:15:29.295 19:33:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:29.295 19:33:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.295 19:33:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:29.295 19:33:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:29.295 19:33:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:29.295 19:33:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.295 19:33:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:15:29.295 19:33:48 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:15:29.295 19:33:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:29.295 19:33:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:29.295 19:33:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:15:29.295 19:33:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:15:29.295 19:33:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:29.295 19:33:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:15:29.295 19:33:48 sw_hotplug -- 
nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:15:29.295 19:33:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:15:29.556 19:33:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:15:29.556 19:33:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:15:29.556 19:33:48 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:15:41.812 19:34:00 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:15:41.813 19:34:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:15:41.813 19:34:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:15:41.813 19:34:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:15:41.813 19:34:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:15:41.813 19:34:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:15:41.813 19:34:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.813 19:34:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:41.813 19:34:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.813 19:34:00 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:15:41.813 19:34:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:15:41.813 19:34:00 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.18 00:15:41.813 19:34:00 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.18 00:15:41.813 19:34:00 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:15:41.813 19:34:00 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.18 00:15:41.813 19:34:00 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.18 2 00:15:41.813 remove_attach_helper took 45.18s to complete (handling 2 nvme drive(s)) 19:34:00 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:15:41.813 19:34:00 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67468 00:15:41.813 19:34:00 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 67468 ']' 00:15:41.813 19:34:00 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 67468 00:15:41.813 19:34:00 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:15:41.813 19:34:00 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:41.813 19:34:00 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67468 00:15:41.813 killing process with pid 67468 00:15:41.813 19:34:00 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:41.813 19:34:00 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:41.813 19:34:00 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67468' 00:15:41.813 19:34:00 sw_hotplug -- common/autotest_common.sh@973 -- # kill 67468 00:15:41.813 19:34:00 sw_hotplug -- common/autotest_common.sh@978 -- # wait 67468 00:15:42.754 19:34:01 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:43.020 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:43.599 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:43.599 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:43.599 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:43.599 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:43.599 00:15:43.599 
real 2m29.485s 00:15:43.599 user 1m51.819s 00:15:43.599 sys 0m16.414s 00:15:43.599 ************************************ 00:15:43.599 END TEST sw_hotplug 00:15:43.599 ************************************ 00:15:43.599 19:34:02 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:43.599 19:34:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:15:43.599 19:34:02 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:15:43.599 19:34:02 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:15:43.599 19:34:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:43.599 19:34:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:43.599 19:34:02 -- common/autotest_common.sh@10 -- # set +x 00:15:43.599 ************************************ 00:15:43.599 START TEST nvme_xnvme 00:15:43.599 ************************************ 00:15:43.599 19:34:02 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:15:43.599 * Looking for test storage... 00:15:43.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:43.599 19:34:02 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:43.865 19:34:02 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:43.865 19:34:02 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:43.865 19:34:02 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:43.865 19:34:02 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:15:43.865 19:34:02 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:43.865 19:34:02 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:43.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.865 --rc genhtml_branch_coverage=1 00:15:43.865 --rc genhtml_function_coverage=1 00:15:43.865 --rc genhtml_legend=1 00:15:43.865 --rc geninfo_all_blocks=1 00:15:43.865 --rc geninfo_unexecuted_blocks=1 00:15:43.865 00:15:43.865 ' 00:15:43.865 19:34:02 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:43.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.865 --rc genhtml_branch_coverage=1 00:15:43.865 --rc genhtml_function_coverage=1 00:15:43.865 --rc genhtml_legend=1 00:15:43.865 --rc geninfo_all_blocks=1 00:15:43.865 --rc geninfo_unexecuted_blocks=1 00:15:43.865 00:15:43.865 ' 00:15:43.865 19:34:02 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:43.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.865 --rc genhtml_branch_coverage=1 00:15:43.865 --rc genhtml_function_coverage=1 00:15:43.865 --rc genhtml_legend=1 00:15:43.865 --rc geninfo_all_blocks=1 00:15:43.865 --rc geninfo_unexecuted_blocks=1 00:15:43.865 00:15:43.865 ' 00:15:43.865 19:34:02 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:43.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.865 --rc genhtml_branch_coverage=1 00:15:43.865 --rc genhtml_function_coverage=1 00:15:43.865 --rc genhtml_legend=1 00:15:43.865 --rc geninfo_all_blocks=1 00:15:43.865 --rc geninfo_unexecuted_blocks=1 00:15:43.865 00:15:43.865 ' 00:15:43.865 19:34:02 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:15:43.865 19:34:02 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:15:43.865 19:34:02 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:15:43.865 19:34:02 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:15:43.865 19:34:02 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:15:43.865 19:34:02 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:15:43.865 19:34:02 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:15:43.865 19:34:02 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:15:43.865 19:34:02 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:15:43.865 19:34:02 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
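[Annotation] The CONFIG_* lines here are test/common/build_config.sh being replayed under xtrace as nvme_xnvme sources autotest_common.sh; the same switches are baked into include/spdk/config.h, which applications.sh pattern-matches a little further down (the @22-@23 trace) before enabling debug-app behavior. That check amounts to roughly the following ($rootdir is assumed to point at the spdk checkout):

    [[ -e "$rootdir/include/spdk/config.h" ]] &&
        [[ $(< "$rootdir/include/spdk/config.h") == *"#define SPDK_CONFIG_DEBUG"* ]]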
00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:15:43.865 19:34:02 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:15:43.866 19:34:02 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:15:43.866 19:34:02 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:15:43.866 19:34:02 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:15:43.866 19:34:02 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:15:43.866 19:34:02 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:15:43.866 19:34:02 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:15:43.866 19:34:02 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:15:43.866 19:34:02 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:15:43.866 19:34:02 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:15:43.866 19:34:02 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:15:43.866 19:34:02 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:15:43.866 19:34:02 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:15:43.866 19:34:02 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:15:43.866 19:34:02 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:15:43.866 19:34:02 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:15:43.866 19:34:02 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:15:43.866 19:34:02 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:15:43.866 19:34:02 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:15:43.866 #define SPDK_CONFIG_H 00:15:43.866 #define SPDK_CONFIG_AIO_FSDEV 1 00:15:43.866 #define SPDK_CONFIG_APPS 1 00:15:43.866 #define SPDK_CONFIG_ARCH native 00:15:43.866 #define SPDK_CONFIG_ASAN 1 00:15:43.866 #undef SPDK_CONFIG_AVAHI 00:15:43.866 #undef SPDK_CONFIG_CET 00:15:43.866 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:15:43.866 #define SPDK_CONFIG_COVERAGE 1 00:15:43.866 #define SPDK_CONFIG_CROSS_PREFIX 00:15:43.866 #undef SPDK_CONFIG_CRYPTO 00:15:43.866 #undef SPDK_CONFIG_CRYPTO_MLX5 00:15:43.866 #undef SPDK_CONFIG_CUSTOMOCF 00:15:43.866 #undef SPDK_CONFIG_DAOS 00:15:43.866 #define SPDK_CONFIG_DAOS_DIR 00:15:43.866 #define SPDK_CONFIG_DEBUG 1 00:15:43.866 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:15:43.866 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:15:43.866 #define SPDK_CONFIG_DPDK_INC_DIR 00:15:43.866 #define SPDK_CONFIG_DPDK_LIB_DIR 00:15:43.866 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:15:43.866 #undef SPDK_CONFIG_DPDK_UADK 00:15:43.866 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:15:43.866 #define SPDK_CONFIG_EXAMPLES 1 00:15:43.866 #undef SPDK_CONFIG_FC 00:15:43.866 #define SPDK_CONFIG_FC_PATH 00:15:43.866 #define SPDK_CONFIG_FIO_PLUGIN 1 00:15:43.866 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:15:43.866 #define SPDK_CONFIG_FSDEV 1 00:15:43.866 #undef SPDK_CONFIG_FUSE 00:15:43.866 #undef SPDK_CONFIG_FUZZER 00:15:43.866 #define SPDK_CONFIG_FUZZER_LIB 00:15:43.866 #undef SPDK_CONFIG_GOLANG 00:15:43.866 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:15:43.866 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:15:43.866 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:15:43.866 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:15:43.866 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:15:43.866 #undef SPDK_CONFIG_HAVE_LIBBSD 00:15:43.866 #undef SPDK_CONFIG_HAVE_LZ4 00:15:43.866 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:15:43.866 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:15:43.866 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:15:43.866 #define SPDK_CONFIG_IDXD 1 00:15:43.866 #define SPDK_CONFIG_IDXD_KERNEL 1 00:15:43.866 #undef SPDK_CONFIG_IPSEC_MB 00:15:43.866 #define SPDK_CONFIG_IPSEC_MB_DIR 00:15:43.866 #define SPDK_CONFIG_ISAL 1 00:15:43.866 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:15:43.866 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:15:43.866 #define SPDK_CONFIG_LIBDIR 00:15:43.866 #undef SPDK_CONFIG_LTO 00:15:43.866 #define SPDK_CONFIG_MAX_LCORES 128 00:15:43.866 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:15:43.866 #define SPDK_CONFIG_NVME_CUSE 1 00:15:43.866 #undef SPDK_CONFIG_OCF 00:15:43.866 #define SPDK_CONFIG_OCF_PATH 00:15:43.866 #define SPDK_CONFIG_OPENSSL_PATH 00:15:43.866 #undef SPDK_CONFIG_PGO_CAPTURE 00:15:43.866 #define SPDK_CONFIG_PGO_DIR 00:15:43.866 #undef SPDK_CONFIG_PGO_USE 00:15:43.866 #define SPDK_CONFIG_PREFIX /usr/local 00:15:43.866 #undef SPDK_CONFIG_RAID5F 00:15:43.866 #undef SPDK_CONFIG_RBD 00:15:43.866 #define SPDK_CONFIG_RDMA 1 00:15:43.866 #define SPDK_CONFIG_RDMA_PROV verbs 00:15:43.866 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:15:43.866 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:15:43.866 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:15:43.866 #define SPDK_CONFIG_SHARED 1 00:15:43.866 #undef SPDK_CONFIG_SMA 00:15:43.866 #define SPDK_CONFIG_TESTS 1 00:15:43.866 #undef SPDK_CONFIG_TSAN 00:15:43.866 #define SPDK_CONFIG_UBLK 1 00:15:43.866 #define SPDK_CONFIG_UBSAN 1 00:15:43.866 #undef SPDK_CONFIG_UNIT_TESTS 00:15:43.866 #undef SPDK_CONFIG_URING 00:15:43.866 #define SPDK_CONFIG_URING_PATH 00:15:43.866 #undef SPDK_CONFIG_URING_ZNS 00:15:43.866 #undef SPDK_CONFIG_USDT 00:15:43.866 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:15:43.866 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:15:43.866 #undef SPDK_CONFIG_VFIO_USER 00:15:43.866 #define SPDK_CONFIG_VFIO_USER_DIR 00:15:43.866 #define SPDK_CONFIG_VHOST 1 00:15:43.866 #define SPDK_CONFIG_VIRTIO 1 00:15:43.866 #undef SPDK_CONFIG_VTUNE 00:15:43.866 #define SPDK_CONFIG_VTUNE_DIR 00:15:43.866 #define SPDK_CONFIG_WERROR 1 00:15:43.866 #define SPDK_CONFIG_WPDK_DIR 00:15:43.866 #define SPDK_CONFIG_XNVME 1 00:15:43.866 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:15:43.866 19:34:02 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:15:43.866 19:34:02 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:43.866 19:34:02 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:15:43.866 19:34:02 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.866 19:34:02 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.866 19:34:02 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.866 19:34:02 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.866 19:34:02 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.866 19:34:02 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.866 19:34:02 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:15:43.867 19:34:02 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:15:43.867 19:34:02 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:15:43.867 19:34:02 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:15:43.867 19:34:02 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:15:43.867 19:34:02 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:15:43.867 19:34:02 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:15:43.867 19:34:02 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:15:43.867 19:34:02 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:15:43.867 19:34:02 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:15:43.867 19:34:02 nvme_xnvme -- pm/common@68 -- # uname -s 00:15:43.867 19:34:02 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:15:43.867 19:34:02 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:15:43.867 
19:34:02 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:15:43.867 19:34:02 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:15:43.867 19:34:02 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:15:43.867 19:34:02 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:15:43.867 19:34:02 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:15:43.867 19:34:02 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:15:43.867 19:34:02 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:15:43.867 19:34:02 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:15:43.867 19:34:02 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:15:43.867 19:34:02 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:15:43.867 19:34:02 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:15:43.867 19:34:02 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:15:43.867 19:34:02 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:15:43.868 19:34:02 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:43.868 19:34:02 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
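[note] The trace above shows autotest_common.sh assembling the sanitizer environment every test binary inherits: ASAN aborts on error, UBSAN halts with exit code 134, and LSAN reads a suppression file that whitelists the known libfuse3 leak. A minimal sketch reproducing that environment for a manual run (the test binary path is a placeholder; the option strings are copied verbatim from the trace):

    #!/usr/bin/env bash
    # Sketch: replicate the sanitizer environment logged above.
    suppfile=/var/tmp/asan_suppression_file
    echo 'leak:libfuse3.so' > "$suppfile"   # known-benign leak, per the trace

    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
    export LSAN_OPTIONS=suppressions=$suppfile

    ./my_test_binary    # placeholder for any ASAN/UBSAN-instrumented SPDK test app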
00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 68822 ]] 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 68822 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.IGX9YM 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.IGX9YM/tests/xnvme /tmp/spdk.IGX9YM 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:15:43.868 19:34:02 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974683648 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593071616 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:15:43.868 19:34:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260629504 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974683648 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593071616 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265245696 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:43.869 19:34:02 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=96483938304 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=3218841600 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:15:43.869 * Looking for test storage... 
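[note] The df parsing just traced, and the candidate search that follows below, implement set_test_storage: parse "df -T" into associative arrays keyed by mount point, then walk the candidate directories (the test dir, a mktemp fallback under /tmp, then the fallback root) until one has at least the requested space (2 GiB plus a 64 MiB margin here, hence requested_size=2214592512). A condensed sketch of the same idea; $testdir and $storage_fallback stand in for the harness variables, and the real helper also special-cases tmpfs/ramfs:

    # Sketch of the storage-selection logic traced here (simplified).
    requested_size=2214592512                      # 2 GiB + 64 MiB margin, as logged
    declare -A avails
    while read -r source fs size use avail _ mount; do
        avails["$mount"]=$((avail * 1024))         # df -T reports 1K blocks
    done < <(df -T | grep -v Filesystem)

    for target_dir in "$testdir" "$storage_fallback/tests/xnvme" "$storage_fallback"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        if (( avails["$mount"] >= requested_size )); then
            printf '* Found test storage at %s\n' "$target_dir"
            break
        fi
    done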
00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13974683648 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:43.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:43.869 19:34:02 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:43.869 19:34:02 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:43.869 19:34:02 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:43.869 19:34:02 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:43.869 19:34:02 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:43.869 19:34:02 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:43.869 19:34:02 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:43.869 19:34:02 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:43.869 19:34:02 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:43.869 19:34:02 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:43.869 19:34:02 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:43.869 19:34:02 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:43.869 19:34:02 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:43.869 19:34:02 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:15:43.869 19:34:02 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:43.869 19:34:02 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:43.869 19:34:02 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:43.870 19:34:02 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:43.870 19:34:02 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:43.870 19:34:02 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:43.870 19:34:02 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:43.870 19:34:02 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:43.870 19:34:02 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:43.870 19:34:02 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:43.870 19:34:02 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:43.870 19:34:02 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:43.870 19:34:02 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:43.870 19:34:02 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:43.870 19:34:02 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:15:43.870 19:34:02 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:43.870 19:34:02 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:43.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.870 --rc genhtml_branch_coverage=1 00:15:43.870 --rc genhtml_function_coverage=1 00:15:43.870 --rc genhtml_legend=1 00:15:43.870 --rc geninfo_all_blocks=1 00:15:43.870 --rc geninfo_unexecuted_blocks=1 00:15:43.870 00:15:43.870 ' 00:15:43.870 19:34:02 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:43.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.870 --rc genhtml_branch_coverage=1 00:15:43.870 --rc genhtml_function_coverage=1 00:15:43.870 --rc genhtml_legend=1 00:15:43.870 --rc geninfo_all_blocks=1 
00:15:43.870 --rc geninfo_unexecuted_blocks=1 00:15:43.870 00:15:43.870 ' 00:15:43.870 19:34:02 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:43.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.870 --rc genhtml_branch_coverage=1 00:15:43.870 --rc genhtml_function_coverage=1 00:15:43.870 --rc genhtml_legend=1 00:15:43.870 --rc geninfo_all_blocks=1 00:15:43.870 --rc geninfo_unexecuted_blocks=1 00:15:43.870 00:15:43.870 ' 00:15:43.870 19:34:02 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:43.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:43.870 --rc genhtml_branch_coverage=1 00:15:43.870 --rc genhtml_function_coverage=1 00:15:43.870 --rc genhtml_legend=1 00:15:43.870 --rc geninfo_all_blocks=1 00:15:43.870 --rc geninfo_unexecuted_blocks=1 00:15:43.870 00:15:43.870 ' 00:15:43.870 19:34:02 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:43.870 19:34:02 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:15:43.870 19:34:02 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:43.870 19:34:02 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:43.870 19:34:02 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:43.870 19:34:02 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.870 19:34:02 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.870 19:34:02 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.870 19:34:02 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:15:43.870 19:34:02 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:43.870 19:34:02 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:15:43.870 19:34:02 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:15:43.870 19:34:02 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:15:43.870 19:34:02 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:15:43.870 19:34:02 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:15:43.870 19:34:02 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:15:43.870 19:34:02 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:15:43.870 19:34:02 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:15:43.870 19:34:02 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:15:43.870 19:34:02 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:15:43.870 19:34:02 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:15:43.870 19:34:02 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:15:43.870 19:34:02 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:15:43.870 19:34:02 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:15:43.870 19:34:02 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:15:43.870 19:34:02 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:15:43.870 19:34:02 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:15:43.870 19:34:02 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:15:43.870 19:34:02 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:15:43.870 19:34:02 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:15:43.870 19:34:02 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:15:43.870 19:34:02 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:44.135 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:44.395 Waiting for block devices as requested 00:15:44.395 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:15:44.395 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:15:44.657 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:15:44.657 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:15:49.965 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:15:49.965 19:34:08 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:15:49.965 19:34:08 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:15:49.965 19:34:08 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:15:50.225 19:34:09 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:15:50.225 19:34:09 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:15:50.225 19:34:09 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:15:50.225 19:34:09 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:15:50.225 19:34:09 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:15:50.225 No valid GPT data, bailing 00:15:50.225 19:34:09 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:15:50.225 19:34:09 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:15:50.225 19:34:09 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:15:50.225 19:34:09 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:15:50.225 19:34:09 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:15:50.225 19:34:09 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:15:50.225 19:34:09 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:15:50.225 19:34:09 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:15:50.225 19:34:09 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:15:50.225 19:34:09 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:15:50.225 19:34:09 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:15:50.225 19:34:09 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:15:50.225 19:34:09 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:15:50.225 19:34:09 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:50.225 19:34:09 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:15:50.225 19:34:09 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:15:50.225 19:34:09 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:50.225 19:34:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:50.225 19:34:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.225 19:34:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:50.225 ************************************ 00:15:50.225 START TEST xnvme_rpc 00:15:50.225 ************************************ 00:15:50.225 19:34:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:50.225 19:34:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:50.225 19:34:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:50.225 19:34:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:50.225 19:34:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:50.225 19:34:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69213 00:15:50.225 19:34:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:50.225 19:34:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69213 00:15:50.225 19:34:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69213 ']' 00:15:50.225 19:34:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.225 19:34:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:50.225 19:34:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.225 19:34:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:50.225 19:34:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:50.486 [2024-12-05 19:34:09.296840] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
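[note] At this point prep_nvme has reloaded the nvme driver with poll_queues=10 and confirmed /dev/nvme0n1 carries no partition table (the "No valid GPT data, bailing" check), so the xnvme_rpc test starts a bare spdk_tgt and drives it over the default UNIX socket. Roughly the same steps by hand, using the rpc.py wrapper instead of the harness's rpc_cmd helper; the argument order mirrors the bdev_xnvme_create call in the trace, but check "rpc.py bdev_xnvme_create -h" on your tree, and the readiness loop below is a stand-in for the harness's waitforlisten:

    # Sketch: what the xnvme_rpc test does, outside the harness.
    spdk_repo=/home/vagrant/spdk_repo/spdk
    "$spdk_repo/build/bin/spdk_tgt" &      # listens on /var/tmp/spdk.sock by default
    tgt_pid=$!
    until "$spdk_repo/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

    # Create an xnvme bdev over the raw block device with the libaio backend.
    "$spdk_repo/scripts/rpc.py" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio

    # ... verify attributes, then tear down.
    "$spdk_repo/scripts/rpc.py" bdev_xnvme_delete xnvme_bdev
    kill "$tgt_pid"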
00:15:50.486 [2024-12-05 19:34:09.296982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69213 ] 00:15:50.486 [2024-12-05 19:34:09.461789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.747 [2024-12-05 19:34:09.594645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.687 xnvme_bdev 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69213 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69213 ']' 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69213 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69213 00:15:51.687 killing process with pid 69213 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69213' 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69213 00:15:51.687 19:34:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69213 00:15:53.600 ************************************ 00:15:53.600 END TEST xnvme_rpc 00:15:53.600 ************************************ 00:15:53.600 00:15:53.600 real 0m3.038s 00:15:53.600 user 0m3.077s 00:15:53.600 sys 0m0.509s 00:15:53.600 19:34:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:53.600 19:34:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:53.600 19:34:12 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:53.600 19:34:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:53.600 19:34:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:53.600 19:34:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:53.600 ************************************ 00:15:53.600 START TEST xnvme_bdevperf 00:15:53.600 ************************************ 00:15:53.600 19:34:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:53.600 19:34:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:53.600 19:34:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:15:53.600 19:34:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:53.600 19:34:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:53.600 19:34:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
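Stripped of harness plumbing, the xnvme_bdevperf invocation traced above amounts to the following (a sketch; /dev/fd/62 is the anonymous descriptor carrying the JSON config printed just below):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
    # the JSON holds one bdev_xnvme_create call (libaio, conserve_cpu=false,
    # /dev/nvme0n1) followed by bdev_wait_for_examine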
00:15:53.600 19:34:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:53.600 19:34:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:53.600 { 00:15:53.600 "subsystems": [ 00:15:53.600 { 00:15:53.600 "subsystem": "bdev", 00:15:53.600 "config": [ 00:15:53.600 { 00:15:53.600 "params": { 00:15:53.600 "io_mechanism": "libaio", 00:15:53.600 "conserve_cpu": false, 00:15:53.600 "filename": "/dev/nvme0n1", 00:15:53.600 "name": "xnvme_bdev" 00:15:53.600 }, 00:15:53.600 "method": "bdev_xnvme_create" 00:15:53.600 }, 00:15:53.600 { 00:15:53.600 "method": "bdev_wait_for_examine" 00:15:53.600 } 00:15:53.600 ] 00:15:53.600 } 00:15:53.600 ] 00:15:53.600 } 00:15:53.600 [2024-12-05 19:34:12.392925] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:15:53.600 [2024-12-05 19:34:12.393318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69287 ] 00:15:53.600 [2024-12-05 19:34:12.555921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.861 [2024-12-05 19:34:12.690970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.122 Running I/O for 5 seconds... 00:15:56.095 35254.00 IOPS, 137.71 MiB/s [2024-12-05T19:34:16.040Z] 33929.50 IOPS, 132.54 MiB/s [2024-12-05T19:34:17.427Z] 32185.00 IOPS, 125.72 MiB/s [2024-12-05T19:34:18.371Z] 31217.25 IOPS, 121.94 MiB/s [2024-12-05T19:34:18.371Z] 30796.60 IOPS, 120.30 MiB/s 00:15:59.365 Latency(us) 00:15:59.365 [2024-12-05T19:34:18.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.365 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:59.365 xnvme_bdev : 5.01 30770.34 120.20 0.00 0.00 2075.43 434.81 9729.58 00:15:59.365 [2024-12-05T19:34:18.371Z] =================================================================================================================== 00:15:59.365 [2024-12-05T19:34:18.371Z] Total : 30770.34 120.20 0.00 0.00 2075.43 434.81 9729.58 00:15:59.936 19:34:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:59.936 19:34:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:59.936 19:34:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:59.936 19:34:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:59.936 19:34:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:59.936 { 00:15:59.936 "subsystems": [ 00:15:59.936 { 00:15:59.936 "subsystem": "bdev", 00:15:59.936 "config": [ 00:15:59.936 { 00:15:59.936 "params": { 00:15:59.936 "io_mechanism": "libaio", 00:15:59.936 "conserve_cpu": false, 00:15:59.936 "filename": "/dev/nvme0n1", 00:15:59.936 "name": "xnvme_bdev" 00:15:59.936 }, 00:15:59.936 "method": "bdev_xnvme_create" 00:15:59.936 }, 00:15:59.936 { 00:15:59.936 "method": "bdev_wait_for_examine" 00:15:59.936 } 00:15:59.936 ] 00:15:59.936 } 00:15:59.936 ] 00:15:59.936 } 00:15:59.936 [2024-12-05 19:34:18.897015] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
00:15:59.936 [2024-12-05 19:34:18.897705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69363 ] 00:16:00.196 [2024-12-05 19:34:19.059311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.196 [2024-12-05 19:34:19.188511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.767 Running I/O for 5 seconds... 00:16:02.654 33392.00 IOPS, 130.44 MiB/s [2024-12-05T19:34:22.605Z] 34055.50 IOPS, 133.03 MiB/s [2024-12-05T19:34:23.551Z] 34133.33 IOPS, 133.33 MiB/s [2024-12-05T19:34:24.936Z] 34536.00 IOPS, 134.91 MiB/s 00:16:05.930 Latency(us) 00:16:05.930 [2024-12-05T19:34:24.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:05.930 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:05.930 xnvme_bdev : 5.00 34530.67 134.89 0.00 0.00 1848.97 259.94 10284.11 00:16:05.930 [2024-12-05T19:34:24.936Z] =================================================================================================================== 00:16:05.930 [2024-12-05T19:34:24.936Z] Total : 34530.67 134.89 0.00 0.00 1848.97 259.94 10284.11 00:16:06.499 00:16:06.499 real 0m12.998s 00:16:06.499 user 0m5.482s 00:16:06.499 sys 0m5.802s 00:16:06.499 19:34:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:06.499 19:34:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:06.499 ************************************ 00:16:06.499 END TEST xnvme_bdevperf 00:16:06.499 ************************************ 00:16:06.499 19:34:25 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:06.499 19:34:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:06.499 19:34:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:06.499 19:34:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:06.499 ************************************ 00:16:06.499 START TEST xnvme_fio_plugin 00:16:06.499 ************************************ 00:16:06.499 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:06.499 19:34:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:06.499 19:34:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:16:06.499 19:34:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:06.499 19:34:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:06.499 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:06.499 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:06.499 19:34:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:06.499 19:34:25 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:06.499 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:06.499 19:34:25 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:06.499 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:06.499 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:06.499 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:06.499 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:06.499 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:06.499 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:06.499 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:06.499 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:06.499 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:06.499 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:06.499 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:06.499 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:06.499 19:34:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:06.499 { 00:16:06.499 "subsystems": [ 00:16:06.499 { 00:16:06.499 "subsystem": "bdev", 00:16:06.499 "config": [ 00:16:06.499 { 00:16:06.499 "params": { 00:16:06.499 "io_mechanism": "libaio", 00:16:06.499 "conserve_cpu": false, 00:16:06.499 "filename": "/dev/nvme0n1", 00:16:06.499 "name": "xnvme_bdev" 00:16:06.499 }, 00:16:06.499 "method": "bdev_xnvme_create" 00:16:06.500 }, 00:16:06.500 { 00:16:06.500 "method": "bdev_wait_for_examine" 00:16:06.500 } 00:16:06.500 ] 00:16:06.500 } 00:16:06.500 ] 00:16:06.500 } 00:16:06.760 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:06.760 fio-3.35 00:16:06.760 Starting 1 thread 00:16:13.341 00:16:13.341 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69482: Thu Dec 5 19:34:31 2024 00:16:13.341 read: IOPS=35.4k, BW=138MiB/s (145MB/s)(691MiB/5001msec) 00:16:13.341 slat (usec): min=4, max=1914, avg=14.78, stdev=80.04 00:16:13.341 clat (usec): min=124, max=6703, avg=1386.90, stdev=464.28 00:16:13.341 lat (usec): min=223, max=6708, avg=1401.68, stdev=456.87 00:16:13.341 clat percentiles (usec): 00:16:13.341 | 1.00th=[ 371], 5.00th=[ 709], 10.00th=[ 848], 20.00th=[ 1020], 00:16:13.341 | 30.00th=[ 1139], 40.00th=[ 1254], 50.00th=[ 1352], 60.00th=[ 1467], 00:16:13.341 | 70.00th=[ 1598], 80.00th=[ 1729], 90.00th=[ 1942], 95.00th=[ 2147], 00:16:13.341 | 99.00th=[ 2671], 99.50th=[ 3064], 99.90th=[ 3916], 99.95th=[ 4555], 00:16:13.341 | 99.99th=[ 5145] 00:16:13.341 bw ( KiB/s): min=130688, max=148512, per=100.00%, avg=142065.78, 
stdev=5294.11, samples=9 00:16:13.341 iops : min=32672, max=37128, avg=35516.44, stdev=1323.53, samples=9 00:16:13.341 lat (usec) : 250=0.31%, 500=1.80%, 750=3.96%, 1000=12.74% 00:16:13.341 lat (msec) : 2=72.98%, 4=8.12%, 10=0.09% 00:16:13.341 cpu : usr=60.88%, sys=32.30%, ctx=19, majf=0, minf=764 00:16:13.341 IO depths : 1=0.9%, 2=2.0%, 4=4.3%, 8=9.6%, 16=22.7%, 32=58.6%, >=64=2.0% 00:16:13.341 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:13.341 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.5%, >=64=0.0% 00:16:13.341 issued rwts: total=176960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:13.341 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:13.341 00:16:13.341 Run status group 0 (all jobs): 00:16:13.341 READ: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=691MiB (725MB), run=5001-5001msec 00:16:13.341 ----------------------------------------------------- 00:16:13.341 Suppressions used: 00:16:13.341 count bytes template 00:16:13.341 1 11 /usr/src/fio/parse.c 00:16:13.341 1 8 libtcmalloc_minimal.so 00:16:13.341 1 904 libcrypto.so 00:16:13.341 ----------------------------------------------------- 00:16:13.341 00:16:13.341 19:34:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:13.341 19:34:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:13.341 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:13.341 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:13.341 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:13.341 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:13.341 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:13.341 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:13.341 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:13.341 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:13.341 19:34:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:13.341 19:34:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:13.341 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:13.341 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:13.341 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:13.341 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:13.341 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:13.341 19:34:32 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:13.341 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:13.341 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:13.341 19:34:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:13.598 { 00:16:13.598 "subsystems": [ 00:16:13.598 { 00:16:13.598 "subsystem": "bdev", 00:16:13.598 "config": [ 00:16:13.598 { 00:16:13.598 "params": { 00:16:13.598 "io_mechanism": "libaio", 00:16:13.598 "conserve_cpu": false, 00:16:13.598 "filename": "/dev/nvme0n1", 00:16:13.598 "name": "xnvme_bdev" 00:16:13.598 }, 00:16:13.598 "method": "bdev_xnvme_create" 00:16:13.598 }, 00:16:13.598 { 00:16:13.598 "method": "bdev_wait_for_examine" 00:16:13.598 } 00:16:13.598 ] 00:16:13.598 } 00:16:13.598 ] 00:16:13.598 } 00:16:13.598 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:13.598 fio-3.35 00:16:13.598 Starting 1 thread 00:16:20.168 00:16:20.168 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69574: Thu Dec 5 19:34:38 2024 00:16:20.168 write: IOPS=37.7k, BW=147MiB/s (154MB/s)(736MiB/5001msec); 0 zone resets 00:16:20.168 slat (usec): min=4, max=1991, avg=17.05, stdev=74.26 00:16:20.168 clat (usec): min=106, max=7662, avg=1227.32, stdev=494.94 00:16:20.168 lat (usec): min=177, max=7666, avg=1244.38, stdev=489.18 00:16:20.168 clat percentiles (usec): 00:16:20.168 | 1.00th=[ 273], 5.00th=[ 490], 10.00th=[ 635], 20.00th=[ 816], 00:16:20.169 | 30.00th=[ 955], 40.00th=[ 1074], 50.00th=[ 1188], 60.00th=[ 1319], 00:16:20.169 | 70.00th=[ 1450], 80.00th=[ 1598], 90.00th=[ 1827], 95.00th=[ 2040], 00:16:20.169 | 99.00th=[ 2704], 99.50th=[ 2999], 99.90th=[ 3589], 99.95th=[ 3916], 00:16:20.169 | 99.99th=[ 4490] 00:16:20.169 bw ( KiB/s): min=140424, max=161416, per=100.00%, avg=151130.67, stdev=6754.73, samples=9 00:16:20.169 iops : min=35106, max=40354, avg=37782.67, stdev=1688.68, samples=9 00:16:20.169 lat (usec) : 250=0.72%, 500=4.58%, 750=10.59%, 1000=17.77% 00:16:20.169 lat (msec) : 2=60.52%, 4=5.78%, 10=0.04% 00:16:20.169 cpu : usr=49.46%, sys=41.50%, ctx=9, majf=0, minf=765 00:16:20.169 IO depths : 1=0.6%, 2=1.4%, 4=3.4%, 8=8.5%, 16=22.9%, 32=61.2%, >=64=2.1% 00:16:20.169 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.169 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:16:20.169 issued rwts: total=0,188521,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.169 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:20.169 00:16:20.169 Run status group 0 (all jobs): 00:16:20.169 WRITE: bw=147MiB/s (154MB/s), 147MiB/s-147MiB/s (154MB/s-154MB/s), io=736MiB (772MB), run=5001-5001msec 00:16:20.169 ----------------------------------------------------- 00:16:20.169 Suppressions used: 00:16:20.169 count bytes template 00:16:20.169 1 11 /usr/src/fio/parse.c 00:16:20.169 1 8 libtcmalloc_minimal.so 00:16:20.169 1 904 libcrypto.so 00:16:20.169 ----------------------------------------------------- 00:16:20.169 00:16:20.169 00:16:20.169 real 0m13.777s 00:16:20.169 user 0m8.348s 00:16:20.169 sys 0m4.243s 00:16:20.169 19:34:39 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.169 ************************************ 00:16:20.169 END TEST xnvme_fio_plugin 00:16:20.169 ************************************ 00:16:20.169 19:34:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:20.430 19:34:39 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:20.430 19:34:39 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:16:20.430 19:34:39 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:16:20.430 19:34:39 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:20.430 19:34:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:20.430 19:34:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.430 19:34:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:20.430 ************************************ 00:16:20.430 START TEST xnvme_rpc 00:16:20.430 ************************************ 00:16:20.430 19:34:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:20.430 19:34:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:20.430 19:34:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:20.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.430 19:34:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:20.430 19:34:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:20.430 19:34:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=69660 00:16:20.430 19:34:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 69660 00:16:20.430 19:34:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 69660 ']' 00:16:20.430 19:34:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.430 19:34:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.430 19:34:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.430 19:34:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.430 19:34:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:20.430 19:34:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:20.430 [2024-12-05 19:34:39.325290] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
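The second xnvme_rpc pass repeats the same create/inspect/delete cycle with CPU conservation on; the only delta is the -c flag on create, which the test then expects to read back as true (sketch, same caveats as the first pass):

    rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c
    rpc_cmd framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'  # expect true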
00:16:20.430 [2024-12-05 19:34:39.325684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69660 ] 00:16:20.690 [2024-12-05 19:34:39.487973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:20.690 [2024-12-05 19:34:39.623274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.635 xnvme_bdev 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:21.635 19:34:40 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 69660 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 69660 ']' 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 69660 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69660 00:16:21.635 killing process with pid 69660 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69660' 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 69660 00:16:21.635 19:34:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 69660 00:16:23.548 00:16:23.548 real 0m2.971s 00:16:23.548 user 0m2.952s 00:16:23.548 sys 0m0.488s 00:16:23.548 19:34:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:23.548 ************************************ 00:16:23.548 END TEST xnvme_rpc 00:16:23.548 ************************************ 00:16:23.548 19:34:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:23.548 19:34:42 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:23.548 19:34:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:23.548 19:34:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.548 19:34:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:23.548 ************************************ 00:16:23.548 START TEST xnvme_bdevperf 00:16:23.548 ************************************ 00:16:23.548 19:34:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:23.548 19:34:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:23.548 19:34:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:16:23.548 19:34:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:23.548 19:34:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:23.548 19:34:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:16:23.548 19:34:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:23.548 19:34:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:23.548 { 00:16:23.548 "subsystems": [ 00:16:23.548 { 00:16:23.548 "subsystem": "bdev", 00:16:23.548 "config": [ 00:16:23.548 { 00:16:23.548 "params": { 00:16:23.548 "io_mechanism": "libaio", 00:16:23.548 "conserve_cpu": true, 00:16:23.548 "filename": "/dev/nvme0n1", 00:16:23.548 "name": "xnvme_bdev" 00:16:23.548 }, 00:16:23.548 "method": "bdev_xnvme_create" 00:16:23.548 }, 00:16:23.548 { 00:16:23.548 "method": "bdev_wait_for_examine" 00:16:23.548 } 00:16:23.548 ] 00:16:23.548 } 00:16:23.548 ] 00:16:23.548 } 00:16:23.548 [2024-12-05 19:34:42.345837] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:16:23.548 [2024-12-05 19:34:42.345975] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69733 ] 00:16:23.548 [2024-12-05 19:34:42.512012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.809 [2024-12-05 19:34:42.636475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.069 Running I/O for 5 seconds... 00:16:25.945 29393.00 IOPS, 114.82 MiB/s [2024-12-05T19:34:46.335Z] 32093.00 IOPS, 125.36 MiB/s [2024-12-05T19:34:47.280Z] 32652.67 IOPS, 127.55 MiB/s [2024-12-05T19:34:48.222Z] 33326.25 IOPS, 130.18 MiB/s [2024-12-05T19:34:48.222Z] 32524.00 IOPS, 127.05 MiB/s 00:16:29.216 Latency(us) 00:16:29.216 [2024-12-05T19:34:48.222Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.216 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:29.217 xnvme_bdev : 5.01 32486.32 126.90 0.00 0.00 1965.48 356.04 11443.59 00:16:29.217 [2024-12-05T19:34:48.223Z] =================================================================================================================== 00:16:29.217 [2024-12-05T19:34:48.223Z] Total : 32486.32 126.90 0.00 0.00 1965.48 356.04 11443.59 00:16:29.832 19:34:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:29.832 19:34:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:29.832 19:34:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:29.832 19:34:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:29.832 19:34:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:30.106 { 00:16:30.106 "subsystems": [ 00:16:30.106 { 00:16:30.106 "subsystem": "bdev", 00:16:30.106 "config": [ 00:16:30.106 { 00:16:30.106 "params": { 00:16:30.106 "io_mechanism": "libaio", 00:16:30.106 "conserve_cpu": true, 00:16:30.106 "filename": "/dev/nvme0n1", 00:16:30.106 "name": "xnvme_bdev" 00:16:30.106 }, 00:16:30.106 "method": "bdev_xnvme_create" 00:16:30.106 }, 00:16:30.106 { 00:16:30.106 "method": "bdev_wait_for_examine" 00:16:30.106 } 00:16:30.106 ] 00:16:30.106 } 00:16:30.106 ] 00:16:30.106 } 00:16:30.106 [2024-12-05 19:34:48.846635] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
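The IOPS and MiB/s columns in these bdevperf tables are consistent for 4 KiB I/O, which gives a quick sanity check on any row; for the randread total above:

    # MiB/s = IOPS * 4096 / 2^20
    printf '%.2f\n' "$(echo '32486.32 * 4096 / 1048576' | bc -l)"   # -> 126.90, as reported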
00:16:30.106 [2024-12-05 19:34:48.846772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69805 ] 00:16:30.106 [2024-12-05 19:34:49.012372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.367 [2024-12-05 19:34:49.139752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.627 Running I/O for 5 seconds... 00:16:32.507 4463.00 IOPS, 17.43 MiB/s [2024-12-05T19:34:52.453Z] 9800.00 IOPS, 38.28 MiB/s [2024-12-05T19:34:53.836Z] 7757.00 IOPS, 30.30 MiB/s [2024-12-05T19:34:54.781Z] 7391.25 IOPS, 28.87 MiB/s [2024-12-05T19:34:54.781Z] 9741.80 IOPS, 38.05 MiB/s 00:16:35.775 Latency(us) 00:16:35.775 [2024-12-05T19:34:54.781Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.775 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:35.775 xnvme_bdev : 5.02 9716.99 37.96 0.00 0.00 6568.79 64.98 38918.30 00:16:35.775 [2024-12-05T19:34:54.781Z] =================================================================================================================== 00:16:35.775 [2024-12-05T19:34:54.781Z] Total : 9716.99 37.96 0.00 0.00 6568.79 64.98 38918.30 00:16:36.346 00:16:36.346 real 0m13.004s 00:16:36.346 user 0m7.930s 00:16:36.346 sys 0m3.926s 00:16:36.346 19:34:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:36.346 19:34:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:36.346 ************************************ 00:16:36.346 END TEST xnvme_bdevperf 00:16:36.346 ************************************ 00:16:36.346 19:34:55 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:36.346 19:34:55 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:36.346 19:34:55 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:36.346 19:34:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:36.346 ************************************ 00:16:36.346 START TEST xnvme_fio_plugin 00:16:36.346 ************************************ 00:16:36.346 19:34:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:36.346 19:34:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:36.346 19:34:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:16:36.346 19:34:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:36.346 19:34:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:36.346 19:34:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:36.346 19:34:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:36.346 19:34:55 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:36.346 19:34:55 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:36.346 19:34:55 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:36.346 19:34:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:36.346 19:34:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:36.346 19:34:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:36.346 19:34:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:36.346 19:34:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:36.346 19:34:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:36.346 19:34:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:36.346 19:34:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:36.607 19:34:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:36.607 19:34:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:36.607 19:34:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:36.607 19:34:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:36.607 19:34:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:36.607 19:34:55 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:36.607 { 00:16:36.607 "subsystems": [ 00:16:36.607 { 00:16:36.607 "subsystem": "bdev", 00:16:36.607 "config": [ 00:16:36.607 { 00:16:36.607 "params": { 00:16:36.607 "io_mechanism": "libaio", 00:16:36.607 "conserve_cpu": true, 00:16:36.607 "filename": "/dev/nvme0n1", 00:16:36.607 "name": "xnvme_bdev" 00:16:36.607 }, 00:16:36.607 "method": "bdev_xnvme_create" 00:16:36.607 }, 00:16:36.607 { 00:16:36.607 "method": "bdev_wait_for_examine" 00:16:36.607 } 00:16:36.607 ] 00:16:36.607 } 00:16:36.607 ] 00:16:36.607 } 00:16:36.607 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:36.607 fio-3.35 00:16:36.607 Starting 1 thread 00:16:43.192 00:16:43.192 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=69929: Thu Dec 5 19:35:01 2024 00:16:43.192 read: IOPS=36.8k, BW=144MiB/s (151MB/s)(720MiB/5003msec) 00:16:43.192 slat (usec): min=4, max=2217, avg=16.06, stdev=81.11 00:16:43.192 clat (usec): min=106, max=8107, avg=1292.26, stdev=461.76 00:16:43.192 lat (usec): min=200, max=8111, avg=1308.32, stdev=454.08 00:16:43.192 clat percentiles (usec): 00:16:43.192 | 1.00th=[ 314], 5.00th=[ 594], 10.00th=[ 758], 20.00th=[ 922], 00:16:43.192 | 30.00th=[ 1045], 40.00th=[ 1156], 50.00th=[ 1270], 60.00th=[ 1369], 00:16:43.192 | 70.00th=[ 1500], 80.00th=[ 1647], 90.00th=[ 1844], 95.00th=[ 2057], 00:16:43.192 | 99.00th=[ 2638], 99.50th=[ 2966], 99.90th=[ 3556], 99.95th=[ 3818], 00:16:43.192 | 99.99th=[ 4424] 00:16:43.192 bw ( KiB/s): min=139336, max=155176, 
per=100.00%, avg=148601.00, stdev=5107.28, samples=9 00:16:43.192 iops : min=34834, max=38794, avg=37150.22, stdev=1276.80, samples=9 00:16:43.192 lat (usec) : 250=0.45%, 500=2.82%, 750=6.28%, 1000=16.68% 00:16:43.192 lat (msec) : 2=67.86%, 4=5.89%, 10=0.03% 00:16:43.192 cpu : usr=54.52%, sys=38.14%, ctx=59, majf=0, minf=764 00:16:43.192 IO depths : 1=0.8%, 2=1.7%, 4=3.8%, 8=9.1%, 16=22.7%, 32=59.8%, >=64=2.1% 00:16:43.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.192 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:16:43.192 issued rwts: total=184341,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:43.192 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:43.192 00:16:43.192 Run status group 0 (all jobs): 00:16:43.192 READ: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=720MiB (755MB), run=5003-5003msec 00:16:43.452 ----------------------------------------------------- 00:16:43.452 Suppressions used: 00:16:43.452 count bytes template 00:16:43.452 1 11 /usr/src/fio/parse.c 00:16:43.452 1 8 libtcmalloc_minimal.so 00:16:43.452 1 904 libcrypto.so 00:16:43.452 ----------------------------------------------------- 00:16:43.452 00:16:43.452 19:35:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:43.452 19:35:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:43.452 19:35:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:43.452 19:35:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:43.452 19:35:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:43.452 19:35:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:43.452 19:35:02 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:43.452 19:35:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:43.452 19:35:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:43.452 19:35:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:43.452 19:35:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:43.452 19:35:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:43.452 19:35:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:43.452 19:35:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:43.452 19:35:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:43.452 19:35:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:43.452 19:35:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:43.452 19:35:02 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:43.452 19:35:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:43.452 19:35:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:43.452 19:35:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:43.452 { 00:16:43.452 "subsystems": [ 00:16:43.452 { 00:16:43.452 "subsystem": "bdev", 00:16:43.452 "config": [ 00:16:43.452 { 00:16:43.452 "params": { 00:16:43.452 "io_mechanism": "libaio", 00:16:43.452 "conserve_cpu": true, 00:16:43.452 "filename": "/dev/nvme0n1", 00:16:43.452 "name": "xnvme_bdev" 00:16:43.452 }, 00:16:43.452 "method": "bdev_xnvme_create" 00:16:43.452 }, 00:16:43.452 { 00:16:43.452 "method": "bdev_wait_for_examine" 00:16:43.452 } 00:16:43.452 ] 00:16:43.452 } 00:16:43.452 ] 00:16:43.452 } 00:16:43.709 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:43.709 fio-3.35 00:16:43.709 Starting 1 thread 00:16:50.293 00:16:50.293 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70020: Thu Dec 5 19:35:08 2024 00:16:50.293 write: IOPS=37.0k, BW=145MiB/s (152MB/s)(723MiB/5001msec); 0 zone resets 00:16:50.293 slat (usec): min=4, max=1865, avg=17.38, stdev=80.50 00:16:50.293 clat (usec): min=77, max=10025, avg=1254.46, stdev=482.08 00:16:50.293 lat (usec): min=92, max=10029, avg=1271.85, stdev=475.65 00:16:50.293 clat percentiles (usec): 00:16:50.293 | 1.00th=[ 306], 5.00th=[ 545], 10.00th=[ 701], 20.00th=[ 881], 00:16:50.293 | 30.00th=[ 1004], 40.00th=[ 1106], 50.00th=[ 1221], 60.00th=[ 1319], 00:16:50.293 | 70.00th=[ 1434], 80.00th=[ 1582], 90.00th=[ 1811], 95.00th=[ 2073], 00:16:50.293 | 99.00th=[ 2704], 99.50th=[ 3064], 99.90th=[ 3818], 99.95th=[ 4015], 00:16:50.293 | 99.99th=[ 7111] 00:16:50.293 bw ( KiB/s): min=139072, max=156248, per=99.69%, avg=147618.67, stdev=6029.99, samples=9 00:16:50.293 iops : min=34768, max=39062, avg=36904.67, stdev=1507.50, samples=9 00:16:50.293 lat (usec) : 100=0.01%, 250=0.50%, 500=3.56%, 750=7.77%, 1000=17.69% 00:16:50.293 lat (msec) : 2=64.49%, 4=5.94%, 10=0.05%, 20=0.01% 00:16:50.293 cpu : usr=50.38%, sys=41.14%, ctx=9, majf=0, minf=765 00:16:50.293 IO depths : 1=0.7%, 2=1.5%, 4=3.4%, 8=8.8%, 16=23.0%, 32=60.5%, >=64=2.2% 00:16:50.293 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.293 complete : 0=0.0%, 4=97.9%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:16:50.293 issued rwts: total=0,185136,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.293 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:50.293 00:16:50.293 Run status group 0 (all jobs): 00:16:50.293 WRITE: bw=145MiB/s (152MB/s), 145MiB/s-145MiB/s (152MB/s-152MB/s), io=723MiB (758MB), run=5001-5001msec 00:16:50.293 ----------------------------------------------------- 00:16:50.293 Suppressions used: 00:16:50.293 count bytes template 00:16:50.293 1 11 /usr/src/fio/parse.c 00:16:50.293 1 8 libtcmalloc_minimal.so 00:16:50.293 1 904 libcrypto.so 00:16:50.293 ----------------------------------------------------- 00:16:50.293 00:16:50.293 00:16:50.293 real 0m13.887s 00:16:50.293 user 0m8.077s 00:16:50.293 sys 0m4.602s 
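Both fio passes in this section follow the same pattern, visible in the LD_PRELOAD trace above: preload the resolved ASAN runtime together with SPDK's fio plugin, then point fio's spdk_bdev ioengine at the JSON bdev config (condensed; paths exactly as resolved in this run):

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev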
00:16:50.293 19:35:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:50.293 19:35:09 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:50.294 ************************************ 00:16:50.294 END TEST xnvme_fio_plugin 00:16:50.294 ************************************ 00:16:50.294 19:35:09 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:16:50.294 19:35:09 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:16:50.294 19:35:09 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:16:50.294 19:35:09 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:16:50.294 19:35:09 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:16:50.294 19:35:09 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:50.294 19:35:09 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:16:50.294 19:35:09 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:16:50.294 19:35:09 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:50.294 19:35:09 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:50.294 19:35:09 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.294 19:35:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:50.555 ************************************ 00:16:50.555 START TEST xnvme_rpc 00:16:50.555 ************************************ 00:16:50.555 19:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:50.555 19:35:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:50.555 19:35:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:50.555 19:35:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:50.555 19:35:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:50.555 19:35:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70102 00:16:50.555 19:35:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70102 00:16:50.555 19:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70102 ']' 00:16:50.555 19:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.555 19:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.555 19:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.555 19:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.555 19:35:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.555 19:35:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:50.555 [2024-12-05 19:35:09.395202] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
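From here the outer loop in xnvme.sh switches io_mechanism from libaio to io_uring and reruns the same three tests; on the RPC side only the mechanism argument changes (sketch; per the filename map set up earlier, io_uring keeps /dev/nvme0n1 while io_uring_cmd would use the /dev/ng0n1 char device instead):

    rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring ''
    rpc_cmd framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'  # expect io_uring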
00:16:50.555 [2024-12-05 19:35:09.395349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70102 ] 00:16:50.555 [2024-12-05 19:35:09.557651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.817 [2024-12-05 19:35:09.682958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.387 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:51.387 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:51.387 19:35:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:16:51.387 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.387 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.654 xnvme_bdev 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70102 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70102 ']' 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70102 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70102 00:16:51.654 killing process with pid 70102 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70102' 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70102 00:16:51.654 19:35:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70102 00:16:53.622 00:16:53.622 real 0m2.911s 00:16:53.622 user 0m2.909s 00:16:53.622 sys 0m0.474s 00:16:53.622 19:35:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:53.622 ************************************ 00:16:53.622 END TEST xnvme_rpc 00:16:53.622 ************************************ 00:16:53.622 19:35:12 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.622 19:35:12 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:53.622 19:35:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:53.622 19:35:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:53.622 19:35:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:53.622 ************************************ 00:16:53.622 START TEST xnvme_bdevperf 00:16:53.622 ************************************ 00:16:53.622 19:35:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:53.622 19:35:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:53.622 19:35:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:16:53.622 19:35:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:53.622 19:35:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:53.622 19:35:12 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:53.622 19:35:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:53.622 19:35:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:53.622 { 00:16:53.622 "subsystems": [ 00:16:53.622 { 00:16:53.622 "subsystem": "bdev", 00:16:53.622 "config": [ 00:16:53.622 { 00:16:53.622 "params": { 00:16:53.622 "io_mechanism": "io_uring", 00:16:53.622 "conserve_cpu": false, 00:16:53.622 "filename": "/dev/nvme0n1", 00:16:53.622 "name": "xnvme_bdev" 00:16:53.622 }, 00:16:53.622 "method": "bdev_xnvme_create" 00:16:53.622 }, 00:16:53.622 { 00:16:53.622 "method": "bdev_wait_for_examine" 00:16:53.622 } 00:16:53.622 ] 00:16:53.622 } 00:16:53.622 ] 00:16:53.622 } 00:16:53.622 [2024-12-05 19:35:12.356533] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:16:53.622 [2024-12-05 19:35:12.356680] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70176 ] 00:16:53.622 [2024-12-05 19:35:12.519799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.883 [2024-12-05 19:35:12.642940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.143 Running I/O for 5 seconds... 00:16:56.025 31260.00 IOPS, 122.11 MiB/s [2024-12-05T19:35:15.969Z] 31790.00 IOPS, 124.18 MiB/s [2024-12-05T19:35:17.353Z] 32123.00 IOPS, 125.48 MiB/s [2024-12-05T19:35:18.298Z] 32003.25 IOPS, 125.01 MiB/s 00:16:59.292 Latency(us) 00:16:59.292 [2024-12-05T19:35:18.298Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.292 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:59.292 xnvme_bdev : 5.00 32643.58 127.51 0.00 0.00 1955.65 293.02 11947.72 00:16:59.292 [2024-12-05T19:35:18.298Z] =================================================================================================================== 00:16:59.292 [2024-12-05T19:35:18.298Z] Total : 32643.58 127.51 0.00 0.00 1955.65 293.02 11947.72 00:16:59.864 19:35:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:59.864 19:35:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:59.864 19:35:18 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:59.864 19:35:18 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:59.864 19:35:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:59.864 { 00:16:59.864 "subsystems": [ 00:16:59.864 { 00:16:59.864 "subsystem": "bdev", 00:16:59.864 "config": [ 00:16:59.864 { 00:16:59.864 "params": { 00:16:59.864 "io_mechanism": "io_uring", 00:16:59.864 "conserve_cpu": false, 00:16:59.864 "filename": "/dev/nvme0n1", 00:16:59.864 "name": "xnvme_bdev" 00:16:59.864 }, 00:16:59.864 "method": "bdev_xnvme_create" 00:16:59.864 }, 00:16:59.864 { 00:16:59.864 "method": "bdev_wait_for_examine" 00:16:59.864 } 00:16:59.864 ] 00:16:59.864 } 00:16:59.864 ] 00:16:59.864 } 00:16:59.864 [2024-12-05 19:35:18.805945] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
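For reference, the bdevperf runs traced above can be reproduced standalone. Every flag and every field of the JSON below is taken verbatim from the trace; the only change in this sketch is that the config goes through a temp file instead of the harness's /dev/fd/62 plumbing, and the SPDK checkout path is the one the trace itself uses:

cat > /tmp/xnvme_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "io_mechanism": "io_uring",
            "conserve_cpu": false,
            "filename": "/dev/nvme0n1",
            "name": "xnvme_bdev"
          },
          "method": "bdev_xnvme_create"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

# -q 64: queue depth, -w randread: workload, -t 5: run time in seconds,
# -T xnvme_bdev: the bdev to exercise, -o 4096: IO size in bytes.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/xnvme_bdev.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096

The randwrite run that starts here differs only in -w randwrite.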
00:16:59.864 [2024-12-05 19:35:18.806091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70257 ] 00:17:00.125 [2024-12-05 19:35:18.969389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.125 [2024-12-05 19:35:19.093927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.386 Running I/O for 5 seconds... 00:17:02.712 7403.00 IOPS, 28.92 MiB/s [2024-12-05T19:35:22.659Z] 9652.00 IOPS, 37.70 MiB/s [2024-12-05T19:35:23.601Z] 8270.00 IOPS, 32.30 MiB/s [2024-12-05T19:35:24.544Z] 9026.00 IOPS, 35.26 MiB/s [2024-12-05T19:35:24.544Z] 8313.40 IOPS, 32.47 MiB/s 00:17:05.538 Latency(us) 00:17:05.538 [2024-12-05T19:35:24.544Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.538 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:05.538 xnvme_bdev : 5.01 8305.46 32.44 0.00 0.00 7692.87 57.50 34280.37 00:17:05.538 [2024-12-05T19:35:24.544Z] =================================================================================================================== 00:17:05.538 [2024-12-05T19:35:24.544Z] Total : 8305.46 32.44 0.00 0.00 7692.87 57.50 34280.37 00:17:06.483 00:17:06.483 real 0m12.914s 00:17:06.483 user 0m5.850s 00:17:06.483 sys 0m6.797s 00:17:06.483 19:35:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:06.483 ************************************ 00:17:06.483 END TEST xnvme_bdevperf 00:17:06.483 ************************************ 00:17:06.483 19:35:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:06.483 19:35:25 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:06.483 19:35:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:06.483 19:35:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:06.483 19:35:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:06.483 ************************************ 00:17:06.483 START TEST xnvme_fio_plugin 00:17:06.483 ************************************ 00:17:06.483 19:35:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:06.483 19:35:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:06.483 19:35:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:17:06.483 19:35:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:06.483 19:35:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:06.483 19:35:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:06.483 19:35:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:06.483 19:35:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:17:06.483 19:35:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:06.483 19:35:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:06.483 19:35:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:06.483 19:35:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:06.483 19:35:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:06.483 19:35:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:06.483 19:35:25 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:06.483 19:35:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:06.483 19:35:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:06.483 19:35:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:06.483 19:35:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:06.483 19:35:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:06.483 19:35:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:06.483 19:35:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:06.483 19:35:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:06.483 19:35:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:06.483 { 00:17:06.483 "subsystems": [ 00:17:06.483 { 00:17:06.483 "subsystem": "bdev", 00:17:06.483 "config": [ 00:17:06.483 { 00:17:06.483 "params": { 00:17:06.483 "io_mechanism": "io_uring", 00:17:06.483 "conserve_cpu": false, 00:17:06.483 "filename": "/dev/nvme0n1", 00:17:06.483 "name": "xnvme_bdev" 00:17:06.483 }, 00:17:06.483 "method": "bdev_xnvme_create" 00:17:06.483 }, 00:17:06.483 { 00:17:06.483 "method": "bdev_wait_for_examine" 00:17:06.483 } 00:17:06.483 ] 00:17:06.483 } 00:17:06.483 ] 00:17:06.483 } 00:17:06.483 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:06.483 fio-3.35 00:17:06.483 Starting 1 thread 00:17:13.115 00:17:13.115 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70371: Thu Dec 5 19:35:31 2024 00:17:13.115 read: IOPS=33.5k, BW=131MiB/s (137MB/s)(655MiB/5002msec) 00:17:13.115 slat (nsec): min=2877, max=78675, avg=4333.14, stdev=2634.61 00:17:13.115 clat (usec): min=1048, max=3594, avg=1733.48, stdev=294.30 00:17:13.115 lat (usec): min=1051, max=3609, avg=1737.81, stdev=295.06 00:17:13.115 clat percentiles (usec): 00:17:13.115 | 1.00th=[ 1254], 5.00th=[ 1352], 10.00th=[ 1418], 20.00th=[ 1500], 00:17:13.115 | 30.00th=[ 1565], 40.00th=[ 1614], 50.00th=[ 1680], 60.00th=[ 1745], 00:17:13.115 | 70.00th=[ 1844], 80.00th=[ 1942], 90.00th=[ 2114], 95.00th=[ 2278], 00:17:13.115 | 99.00th=[ 2638], 99.50th=[ 2769], 99.90th=[ 3097], 99.95th=[ 3326], 00:17:13.115 | 99.99th=[ 3523] 00:17:13.115 bw ( KiB/s): min=128512, 
max=138240, per=99.89%, avg=133887.00, stdev=3487.18, samples=9 00:17:13.115 iops : min=32128, max=34560, avg=33471.67, stdev=871.84, samples=9 00:17:13.115 lat (msec) : 2=83.57%, 4=16.43% 00:17:13.115 cpu : usr=31.75%, sys=66.71%, ctx=11, majf=0, minf=762 00:17:13.115 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:13.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:13.115 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:13.115 issued rwts: total=167616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:13.115 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:13.115 00:17:13.115 Run status group 0 (all jobs): 00:17:13.115 READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=655MiB (687MB), run=5002-5002msec 00:17:13.376 ----------------------------------------------------- 00:17:13.376 Suppressions used: 00:17:13.376 count bytes template 00:17:13.376 1 11 /usr/src/fio/parse.c 00:17:13.376 1 8 libtcmalloc_minimal.so 00:17:13.376 1 904 libcrypto.so 00:17:13.376 ----------------------------------------------------- 00:17:13.376 00:17:13.376 19:35:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:13.376 19:35:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:13.377 19:35:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:13.377 19:35:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:13.377 19:35:32 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:13.377 19:35:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:13.377 19:35:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:13.377 19:35:32 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:13.377 19:35:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:13.377 19:35:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:13.377 19:35:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:13.377 19:35:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:13.377 19:35:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:13.377 19:35:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:13.377 19:35:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:13.377 19:35:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:13.377 19:35:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:13.377 19:35:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:17:13.377 19:35:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:13.377 19:35:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:13.377 19:35:32 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:13.377 { 00:17:13.377 "subsystems": [ 00:17:13.377 { 00:17:13.377 "subsystem": "bdev", 00:17:13.377 "config": [ 00:17:13.377 { 00:17:13.377 "params": { 00:17:13.377 "io_mechanism": "io_uring", 00:17:13.377 "conserve_cpu": false, 00:17:13.377 "filename": "/dev/nvme0n1", 00:17:13.377 "name": "xnvme_bdev" 00:17:13.377 }, 00:17:13.377 "method": "bdev_xnvme_create" 00:17:13.377 }, 00:17:13.377 { 00:17:13.377 "method": "bdev_wait_for_examine" 00:17:13.377 } 00:17:13.377 ] 00:17:13.377 } 00:17:13.377 ] 00:17:13.377 } 00:17:13.638 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:13.638 fio-3.35 00:17:13.638 Starting 1 thread 00:17:20.234 00:17:20.234 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70468: Thu Dec 5 19:35:38 2024 00:17:20.234 write: IOPS=33.0k, BW=129MiB/s (135MB/s)(645MiB/5002msec); 0 zone resets 00:17:20.234 slat (usec): min=2, max=103, avg= 4.38, stdev= 2.60 00:17:20.234 clat (usec): min=701, max=7031, avg=1760.59, stdev=292.59 00:17:20.234 lat (usec): min=707, max=7034, avg=1764.97, stdev=293.08 00:17:20.234 clat percentiles (usec): 00:17:20.234 | 1.00th=[ 1237], 5.00th=[ 1352], 10.00th=[ 1434], 20.00th=[ 1516], 00:17:20.234 | 30.00th=[ 1598], 40.00th=[ 1663], 50.00th=[ 1729], 60.00th=[ 1795], 00:17:20.234 | 70.00th=[ 1876], 80.00th=[ 1991], 90.00th=[ 2147], 95.00th=[ 2278], 00:17:20.234 | 99.00th=[ 2606], 99.50th=[ 2737], 99.90th=[ 3163], 99.95th=[ 3589], 00:17:20.234 | 99.99th=[ 4621] 00:17:20.234 bw ( KiB/s): min=123608, max=138544, per=100.00%, avg=132325.33, stdev=5159.96, samples=9 00:17:20.234 iops : min=30902, max=34636, avg=33081.33, stdev=1289.99, samples=9 00:17:20.234 lat (usec) : 750=0.01%, 1000=0.01% 00:17:20.234 lat (msec) : 2=81.38%, 4=18.59%, 10=0.01% 00:17:20.234 cpu : usr=32.63%, sys=65.83%, ctx=13, majf=0, minf=763 00:17:20.234 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:17:20.234 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:20.234 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:17:20.234 issued rwts: total=0,165045,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:20.234 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:20.234 00:17:20.234 Run status group 0 (all jobs): 00:17:20.234 WRITE: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=645MiB (676MB), run=5002-5002msec 00:17:20.234 ----------------------------------------------------- 00:17:20.234 Suppressions used: 00:17:20.234 count bytes template 00:17:20.234 1 11 /usr/src/fio/parse.c 00:17:20.234 1 8 libtcmalloc_minimal.so 00:17:20.234 1 904 libcrypto.so 00:17:20.234 ----------------------------------------------------- 00:17:20.234 00:17:20.234 00:17:20.234 real 0m13.854s 00:17:20.234 user 0m6.172s 00:17:20.234 sys 0m7.196s 00:17:20.234 19:35:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:17:20.234 ************************************ 00:17:20.234 END TEST xnvme_fio_plugin 00:17:20.234 ************************************ 00:17:20.234 19:35:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:20.234 19:35:39 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:20.234 19:35:39 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:17:20.234 19:35:39 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:17:20.234 19:35:39 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:20.234 19:35:39 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:20.234 19:35:39 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:20.234 19:35:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:20.234 ************************************ 00:17:20.234 START TEST xnvme_rpc 00:17:20.234 ************************************ 00:17:20.234 19:35:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:20.234 19:35:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:20.234 19:35:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:20.234 19:35:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:20.234 19:35:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:20.234 19:35:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70549 00:17:20.234 19:35:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70549 00:17:20.234 19:35:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70549 ']' 00:17:20.235 19:35:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.235 19:35:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:20.235 19:35:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.235 19:35:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:20.235 19:35:39 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.235 19:35:39 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:20.495 [2024-12-05 19:35:39.281013] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
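The xnvme.sh loop just traced is the conserve_cpu half of the test matrix: the whole rpc/bdevperf/fio_plugin sequence repeats with method_bdev_xnvme_create_0["conserve_cpu"]=true. Inside xnvme_rpc the cc table shown above (cc["false"] empty, cc["true"]=-c) maps that setting onto the create call, so the only functional difference between the two passes is one switch. Both forms below are verbatim from the trace; rpc_cmd is the harness's RPC wrapper:

rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring ''   # pass 1: cc["false"] expands to nothing
rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c   # pass 2: cc["true"] adds conserve_cpu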
00:17:20.495 [2024-12-05 19:35:39.281186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70549 ] 00:17:20.495 [2024-12-05 19:35:39.443104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.756 [2024-12-05 19:35:39.575521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.327 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.327 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:21.327 19:35:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:17:21.327 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.327 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.327 xnvme_bdev 00:17:21.327 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.327 19:35:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:21.327 19:35:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:21.327 19:35:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:21.327 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.327 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.327 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.588 19:35:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:21.588 19:35:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:21.588 19:35:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70549 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70549 ']' 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70549 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70549 00:17:21.589 killing process with pid 70549 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70549' 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70549 00:17:21.589 19:35:40 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70549 00:17:23.507 00:17:23.507 real 0m2.946s 00:17:23.507 user 0m2.947s 00:17:23.507 sys 0m0.488s 00:17:23.507 19:35:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.507 ************************************ 00:17:23.507 END TEST xnvme_rpc 00:17:23.507 ************************************ 00:17:23.507 19:35:42 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.507 19:35:42 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:23.507 19:35:42 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:23.507 19:35:42 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.507 19:35:42 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:23.507 ************************************ 00:17:23.507 START TEST xnvme_bdevperf 00:17:23.507 ************************************ 00:17:23.507 19:35:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:23.507 19:35:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:23.507 19:35:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:17:23.507 19:35:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:23.507 19:35:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:23.507 19:35:42 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
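Condensed, the xnvme_rpc test that just finished is a short RPC round-trip against a bare spdk_tgt. The sketch below drives it with scripts/rpc.py directly; that rpc_cmd forwards its arguments to rpc.py is an assumption about the harness, while the target path, RPC names, arguments, and jq filters are exactly what the trace shows:

cd /home/vagrant/spdk_repo/spdk
build/bin/spdk_tgt &      # the harness waits for /var/tmp/spdk.sock before issuing RPCs
tgt_pid=$!

# 1) create the xnvme bdev (-c means conserve_cpu=true in this pass)
scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c

# 2) read the saved config back and check each parameter the same way
#    the test does (name, filename, io_mechanism, conserve_cpu)
scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true

# 3) tear the bdev down, then 4) stop the target
scripts/rpc.py bdev_xnvme_delete xnvme_bdev
kill "$tgt_pid"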
00:17:23.507 19:35:42 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:23.507 19:35:42 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:23.507 { 00:17:23.507 "subsystems": [ 00:17:23.507 { 00:17:23.507 "subsystem": "bdev", 00:17:23.507 "config": [ 00:17:23.507 { 00:17:23.507 "params": { 00:17:23.507 "io_mechanism": "io_uring", 00:17:23.507 "conserve_cpu": true, 00:17:23.507 "filename": "/dev/nvme0n1", 00:17:23.507 "name": "xnvme_bdev" 00:17:23.507 }, 00:17:23.507 "method": "bdev_xnvme_create" 00:17:23.507 }, 00:17:23.507 { 00:17:23.507 "method": "bdev_wait_for_examine" 00:17:23.507 } 00:17:23.507 ] 00:17:23.507 } 00:17:23.507 ] 00:17:23.507 } 00:17:23.507 [2024-12-05 19:35:42.278094] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:17:23.507 [2024-12-05 19:35:42.278275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70623 ] 00:17:23.507 [2024-12-05 19:35:42.444089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.767 [2024-12-05 19:35:42.565440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.025 Running I/O for 5 seconds... 00:17:25.934 45648.00 IOPS, 178.31 MiB/s [2024-12-05T19:35:45.878Z] 53817.50 IOPS, 210.22 MiB/s [2024-12-05T19:35:47.260Z] 55918.33 IOPS, 218.43 MiB/s [2024-12-05T19:35:47.830Z] 51890.75 IOPS, 202.70 MiB/s [2024-12-05T19:35:47.830Z] 48859.40 IOPS, 190.86 MiB/s 00:17:28.824 Latency(us) 00:17:28.824 [2024-12-05T19:35:47.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.824 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:28.824 xnvme_bdev : 5.01 48810.59 190.67 0.00 0.00 1306.23 57.90 14720.39 00:17:28.824 [2024-12-05T19:35:47.830Z] =================================================================================================================== 00:17:28.824 [2024-12-05T19:35:47.830Z] Total : 48810.59 190.67 0.00 0.00 1306.23 57.90 14720.39 00:17:29.767 19:35:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:29.767 19:35:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:29.767 19:35:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:29.767 19:35:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:29.767 19:35:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:29.767 { 00:17:29.767 "subsystems": [ 00:17:29.767 { 00:17:29.767 "subsystem": "bdev", 00:17:29.767 "config": [ 00:17:29.767 { 00:17:29.767 "params": { 00:17:29.767 "io_mechanism": "io_uring", 00:17:29.767 "conserve_cpu": true, 00:17:29.767 "filename": "/dev/nvme0n1", 00:17:29.767 "name": "xnvme_bdev" 00:17:29.767 }, 00:17:29.767 "method": "bdev_xnvme_create" 00:17:29.767 }, 00:17:29.767 { 00:17:29.767 "method": "bdev_wait_for_examine" 00:17:29.767 } 00:17:29.767 ] 00:17:29.767 } 00:17:29.767 ] 00:17:29.767 } 00:17:29.767 [2024-12-05 19:35:48.454661] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
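A note on the --json /dev/fd/62 argument that keeps recurring: it is consistent with the config being handed to bdevperf through bash process substitution rather than a file on disk, with gen_conf (a harness helper, seen in the xtrace) emitting the JSON printed alongside each run. A sketch of that pattern, under that assumption:

# bash opens gen_conf's output on a /dev/fd/NN path and passes the path
# as the --json argument; under xtrace only the /dev/fd path is visible.
build/examples/bdevperf --json <(gen_conf) -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096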
00:17:29.767 [2024-12-05 19:35:48.454751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70698 ] 00:17:29.767 [2024-12-05 19:35:48.598615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.768 [2024-12-05 19:35:48.675046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.028 Running I/O for 5 seconds... 00:17:31.913 10181.00 IOPS, 39.77 MiB/s [2024-12-05T19:35:52.303Z] 13513.00 IOPS, 52.79 MiB/s [2024-12-05T19:35:52.873Z] 12482.33 IOPS, 48.76 MiB/s [2024-12-05T19:35:54.258Z] 13950.50 IOPS, 54.49 MiB/s [2024-12-05T19:35:54.259Z] 14470.40 IOPS, 56.52 MiB/s 00:17:35.253 Latency(us) 00:17:35.253 [2024-12-05T19:35:54.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.253 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:35.253 xnvme_bdev : 5.01 14458.64 56.48 0.00 0.00 4418.32 49.03 26012.75 00:17:35.253 [2024-12-05T19:35:54.259Z] =================================================================================================================== 00:17:35.253 [2024-12-05T19:35:54.259Z] Total : 14458.64 56.48 0.00 0.00 4418.32 49.03 26012.75 00:17:35.825 00:17:35.825 real 0m12.464s 00:17:35.825 user 0m7.933s 00:17:35.825 sys 0m3.615s 00:17:35.825 19:35:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:35.825 19:35:54 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:35.825 ************************************ 00:17:35.825 END TEST xnvme_bdevperf 00:17:35.825 ************************************ 00:17:35.826 19:35:54 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:35.826 19:35:54 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:35.826 19:35:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:35.826 19:35:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:35.826 ************************************ 00:17:35.826 START TEST xnvme_fio_plugin 00:17:35.826 ************************************ 00:17:35.826 19:35:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:35.826 19:35:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:35.826 19:35:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:17:35.826 19:35:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:35.826 19:35:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:35.826 19:35:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:35.826 19:35:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:35.826 19:35:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:17:35.826 19:35:54 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:35.826 19:35:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:35.826 19:35:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:35.826 19:35:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:35.826 19:35:54 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:35.826 19:35:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:35.826 19:35:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:35.826 19:35:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:35.826 19:35:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:35.826 19:35:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:35.826 19:35:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:35.826 19:35:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:35.826 19:35:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:35.826 19:35:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:35.826 19:35:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:35.826 19:35:54 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:35.826 { 00:17:35.826 "subsystems": [ 00:17:35.826 { 00:17:35.826 "subsystem": "bdev", 00:17:35.826 "config": [ 00:17:35.826 { 00:17:35.826 "params": { 00:17:35.826 "io_mechanism": "io_uring", 00:17:35.826 "conserve_cpu": true, 00:17:35.826 "filename": "/dev/nvme0n1", 00:17:35.826 "name": "xnvme_bdev" 00:17:35.826 }, 00:17:35.826 "method": "bdev_xnvme_create" 00:17:35.826 }, 00:17:35.826 { 00:17:35.826 "method": "bdev_wait_for_examine" 00:17:35.826 } 00:17:35.826 ] 00:17:35.826 } 00:17:35.826 ] 00:17:35.826 } 00:17:36.088 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:36.088 fio-3.35 00:17:36.088 Starting 1 thread 00:17:42.679 00:17:42.679 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70812: Thu Dec 5 19:36:00 2024 00:17:42.679 read: IOPS=33.3k, BW=130MiB/s (136MB/s)(651MiB/5002msec) 00:17:42.679 slat (usec): min=2, max=1133, avg= 4.09, stdev= 4.35 00:17:42.679 clat (usec): min=909, max=4019, avg=1753.53, stdev=294.33 00:17:42.679 lat (usec): min=913, max=4049, avg=1757.62, stdev=294.83 00:17:42.679 clat percentiles (usec): 00:17:42.680 | 1.00th=[ 1205], 5.00th=[ 1352], 10.00th=[ 1418], 20.00th=[ 1516], 00:17:42.680 | 30.00th=[ 1582], 40.00th=[ 1647], 50.00th=[ 1713], 60.00th=[ 1778], 00:17:42.680 | 70.00th=[ 1876], 80.00th=[ 1975], 90.00th=[ 2147], 95.00th=[ 2278], 00:17:42.680 | 99.00th=[ 2638], 99.50th=[ 2802], 99.90th=[ 3064], 99.95th=[ 3163], 00:17:42.680 | 99.99th=[ 3884] 00:17:42.680 bw ( KiB/s): min=129277, max=139776, 
per=100.00%, avg=133973.00, stdev=3365.24, samples=9 00:17:42.680 iops : min=32319, max=34944, avg=33493.22, stdev=841.35, samples=9 00:17:42.680 lat (usec) : 1000=0.06% 00:17:42.680 lat (msec) : 2=82.02%, 4=17.92%, 10=0.01% 00:17:42.680 cpu : usr=42.95%, sys=51.75%, ctx=53, majf=0, minf=762 00:17:42.680 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:42.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:42.680 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:17:42.680 issued rwts: total=166624,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:42.680 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:42.680 00:17:42.680 Run status group 0 (all jobs): 00:17:42.680 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=651MiB (682MB), run=5002-5002msec 00:17:42.680 ----------------------------------------------------- 00:17:42.680 Suppressions used: 00:17:42.680 count bytes template 00:17:42.680 1 11 /usr/src/fio/parse.c 00:17:42.680 1 8 libtcmalloc_minimal.so 00:17:42.680 1 904 libcrypto.so 00:17:42.680 ----------------------------------------------------- 00:17:42.680 00:17:42.680 19:36:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:42.680 19:36:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:42.680 19:36:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:42.680 19:36:01 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:42.680 19:36:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:42.680 19:36:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:42.680 19:36:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:42.680 19:36:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:42.680 19:36:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:42.680 19:36:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:42.680 19:36:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:42.680 19:36:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:42.680 19:36:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:42.680 19:36:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:42.941 19:36:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:42.941 19:36:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:42.941 19:36:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:42.941 19:36:01 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:42.941 19:36:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:42.941 19:36:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:42.941 19:36:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:42.941 { 00:17:42.941 "subsystems": [ 00:17:42.941 { 00:17:42.941 "subsystem": "bdev", 00:17:42.941 "config": [ 00:17:42.941 { 00:17:42.941 "params": { 00:17:42.941 "io_mechanism": "io_uring", 00:17:42.941 "conserve_cpu": true, 00:17:42.941 "filename": "/dev/nvme0n1", 00:17:42.941 "name": "xnvme_bdev" 00:17:42.941 }, 00:17:42.941 "method": "bdev_xnvme_create" 00:17:42.941 }, 00:17:42.941 { 00:17:42.941 "method": "bdev_wait_for_examine" 00:17:42.941 } 00:17:42.941 ] 00:17:42.941 } 00:17:42.941 ] 00:17:42.941 } 00:17:42.941 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:42.941 fio-3.35 00:17:42.941 Starting 1 thread 00:17:49.539 00:17:49.539 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70898: Thu Dec 5 19:36:07 2024 00:17:49.539 write: IOPS=34.5k, BW=135MiB/s (142MB/s)(675MiB/5001msec); 0 zone resets 00:17:49.539 slat (usec): min=2, max=105, avg= 4.11, stdev= 2.36 00:17:49.539 clat (usec): min=957, max=4606, avg=1684.52, stdev=283.67 00:17:49.539 lat (usec): min=960, max=4610, avg=1688.63, stdev=284.11 00:17:49.539 clat percentiles (usec): 00:17:49.539 | 1.00th=[ 1156], 5.00th=[ 1270], 10.00th=[ 1352], 20.00th=[ 1450], 00:17:49.539 | 30.00th=[ 1516], 40.00th=[ 1582], 50.00th=[ 1647], 60.00th=[ 1729], 00:17:49.539 | 70.00th=[ 1811], 80.00th=[ 1909], 90.00th=[ 2073], 95.00th=[ 2180], 00:17:49.539 | 99.00th=[ 2474], 99.50th=[ 2573], 99.90th=[ 2868], 99.95th=[ 2999], 00:17:49.539 | 99.99th=[ 3490] 00:17:49.539 bw ( KiB/s): min=132608, max=148904, per=100.00%, avg=138403.56, stdev=5271.99, samples=9 00:17:49.539 iops : min=33152, max=37226, avg=34600.89, stdev=1318.00, samples=9 00:17:49.539 lat (usec) : 1000=0.01% 00:17:49.539 lat (msec) : 2=86.35%, 4=13.64%, 10=0.01% 00:17:49.539 cpu : usr=44.22%, sys=51.42%, ctx=31, majf=0, minf=763 00:17:49.539 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:49.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:49.539 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:49.539 issued rwts: total=0,172780,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:49.539 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:49.539 00:17:49.539 Run status group 0 (all jobs): 00:17:49.539 WRITE: bw=135MiB/s (142MB/s), 135MiB/s-135MiB/s (142MB/s-142MB/s), io=675MiB (708MB), run=5001-5001msec 00:17:49.800 ----------------------------------------------------- 00:17:49.800 Suppressions used: 00:17:49.800 count bytes template 00:17:49.800 1 11 /usr/src/fio/parse.c 00:17:49.800 1 8 libtcmalloc_minimal.so 00:17:49.800 1 904 libcrypto.so 00:17:49.800 ----------------------------------------------------- 00:17:49.800 00:17:49.800 00:17:49.800 real 0m13.831s 00:17:49.800 user 0m7.287s 00:17:49.800 sys 0m5.727s 00:17:49.800 19:36:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:17:49.800 ************************************ 00:17:49.800 END TEST xnvme_fio_plugin 00:17:49.800 ************************************ 00:17:49.800 19:36:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:49.800 19:36:08 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:17:49.800 19:36:08 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:17:49.800 19:36:08 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:17:49.800 19:36:08 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:17:49.800 19:36:08 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:17:49.800 19:36:08 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:49.800 19:36:08 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:17:49.800 19:36:08 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:17:49.800 19:36:08 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:49.800 19:36:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:49.800 19:36:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.800 19:36:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:49.800 ************************************ 00:17:49.800 START TEST xnvme_rpc 00:17:49.800 ************************************ 00:17:49.800 19:36:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:49.800 19:36:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:49.800 19:36:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:49.800 19:36:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:49.800 19:36:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:49.800 19:36:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70984 00:17:49.800 19:36:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70984 00:17:49.800 19:36:08 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:49.800 19:36:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70984 ']' 00:17:49.800 19:36:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.800 19:36:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.800 19:36:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.800 19:36:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.800 19:36:08 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:49.800 [2024-12-05 19:36:08.734912] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
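Both fio_plugin passes above share one mechanic worth spelling out: fio loads SPDK's bdev engine as an external plugin, and because this build is ASAN-instrumented (SPDK_RUN_ASAN=1), the runner first resolves the plugin's libasan dependency with ldd and preloads it ahead of the plugin so the sanitizer runtime initializes first. A condensed sketch with the paths and flags as they appear in the trace; gen_conf again stands for the JSON emitter:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev

# find the ASAN runtime the plugin links against ...
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

# ... and preload it before the plugin itself. --filename names the bdev
# defined in the JSON config, and --thread=1 runs fio in thread mode,
# which the SPDK engines require.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=<(gen_conf) --filename=xnvme_bdev \
    --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread \
    --time_based --runtime=5 --thread=1 --name xnvme_bdev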
00:17:49.800 [2024-12-05 19:36:08.735603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70984 ] 00:17:50.061 [2024-12-05 19:36:08.898380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.061 [2024-12-05 19:36:09.025888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.003 xnvme_bdev 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:51.003 
19:36:09 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70984 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70984 ']' 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70984 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70984 00:17:51.003 killing process with pid 70984 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70984' 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70984 00:17:51.003 19:36:09 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70984 00:17:52.908 00:17:52.908 real 0m2.844s 00:17:52.908 user 0m2.876s 00:17:52.908 sys 0m0.442s 00:17:52.908 19:36:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.908 ************************************ 00:17:52.908 END TEST xnvme_rpc 00:17:52.908 ************************************ 00:17:52.908 19:36:11 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:52.908 19:36:11 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:52.908 19:36:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:52.908 19:36:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.908 19:36:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:52.908 ************************************ 00:17:52.908 START TEST xnvme_bdevperf 00:17:52.908 ************************************ 00:17:52.908 19:36:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:52.908 19:36:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:52.908 19:36:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:17:52.908 19:36:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:52.908 19:36:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:52.908 19:36:11 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:17:52.908 19:36:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:52.908 19:36:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:52.908 { 00:17:52.908 "subsystems": [ 00:17:52.908 { 00:17:52.908 "subsystem": "bdev", 00:17:52.908 "config": [ 00:17:52.908 { 00:17:52.908 "params": { 00:17:52.908 "io_mechanism": "io_uring_cmd", 00:17:52.908 "conserve_cpu": false, 00:17:52.908 "filename": "/dev/ng0n1", 00:17:52.908 "name": "xnvme_bdev" 00:17:52.908 }, 00:17:52.908 "method": "bdev_xnvme_create" 00:17:52.908 }, 00:17:52.908 { 00:17:52.908 "method": "bdev_wait_for_examine" 00:17:52.908 } 00:17:52.908 ] 00:17:52.908 } 00:17:52.908 ] 00:17:52.908 } 00:17:52.908 [2024-12-05 19:36:11.610600] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:17:52.908 [2024-12-05 19:36:11.610704] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71058 ] 00:17:52.909 [2024-12-05 19:36:11.769956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.909 [2024-12-05 19:36:11.867584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.167 Running I/O for 5 seconds... 00:17:55.475 59145.00 IOPS, 231.04 MiB/s [2024-12-05T19:36:15.467Z] 62978.00 IOPS, 246.01 MiB/s [2024-12-05T19:36:16.408Z] 63458.00 IOPS, 247.88 MiB/s [2024-12-05T19:36:17.349Z] 61927.50 IOPS, 241.90 MiB/s [2024-12-05T19:36:17.349Z] 58349.80 IOPS, 227.93 MiB/s 00:17:58.343 Latency(us) 00:17:58.343 [2024-12-05T19:36:17.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.343 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:58.343 xnvme_bdev : 5.01 58230.51 227.46 0.00 0.00 1094.99 302.47 12250.19 00:17:58.343 [2024-12-05T19:36:17.349Z] =================================================================================================================== 00:17:58.343 [2024-12-05T19:36:17.349Z] Total : 58230.51 227.46 0.00 0.00 1094.99 302.47 12250.19 00:17:58.917 19:36:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:58.917 19:36:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:58.917 19:36:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:58.917 19:36:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:58.917 19:36:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:59.177 { 00:17:59.177 "subsystems": [ 00:17:59.177 { 00:17:59.177 "subsystem": "bdev", 00:17:59.177 "config": [ 00:17:59.177 { 00:17:59.177 "params": { 00:17:59.177 "io_mechanism": "io_uring_cmd", 00:17:59.177 "conserve_cpu": false, 00:17:59.177 "filename": "/dev/ng0n1", 00:17:59.177 "name": "xnvme_bdev" 00:17:59.177 }, 00:17:59.177 "method": "bdev_xnvme_create" 00:17:59.177 }, 00:17:59.177 { 00:17:59.177 "method": "bdev_wait_for_examine" 00:17:59.177 } 00:17:59.177 ] 00:17:59.177 } 00:17:59.177 ] 00:17:59.177 } 00:17:59.177 [2024-12-05 19:36:17.982581] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
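From here the matrix moves to the second io mechanism: io_uring_cmd against /dev/ng0n1. The ng node is the NVMe generic character device, so xnvme submits NVMe commands via io_uring passthrough instead of going through the block layer; that is also why this pass adds unmap and write_zeroes workloads below, which the block-device runs skipped. The create call differs from the earlier passes only in its first and third arguments (verbatim from the trace):

rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd ''   # char device + passthrough engine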
00:17:59.177 [2024-12-05 19:36:17.982712] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71135 ] 00:17:59.177 [2024-12-05 19:36:18.148460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.439 [2024-12-05 19:36:18.266016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.699 Running I/O for 5 seconds... 00:18:01.568 24810.00 IOPS, 96.91 MiB/s [2024-12-05T19:36:21.949Z] 26036.50 IOPS, 101.71 MiB/s [2024-12-05T19:36:22.884Z] 27108.00 IOPS, 105.89 MiB/s [2024-12-05T19:36:23.817Z] 27816.25 IOPS, 108.66 MiB/s [2024-12-05T19:36:23.817Z] 28095.60 IOPS, 109.75 MiB/s 00:18:04.811 Latency(us) 00:18:04.811 [2024-12-05T19:36:23.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.811 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:04.811 xnvme_bdev : 5.01 28064.64 109.63 0.00 0.00 2275.00 52.78 15426.17 00:18:04.811 [2024-12-05T19:36:23.817Z] =================================================================================================================== 00:18:04.811 [2024-12-05T19:36:23.817Z] Total : 28064.64 109.63 0.00 0.00 2275.00 52.78 15426.17 00:18:05.379 19:36:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:05.379 19:36:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:18:05.379 19:36:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:05.379 19:36:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:05.379 19:36:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:05.379 { 00:18:05.379 "subsystems": [ 00:18:05.379 { 00:18:05.379 "subsystem": "bdev", 00:18:05.379 "config": [ 00:18:05.379 { 00:18:05.379 "params": { 00:18:05.379 "io_mechanism": "io_uring_cmd", 00:18:05.379 "conserve_cpu": false, 00:18:05.379 "filename": "/dev/ng0n1", 00:18:05.379 "name": "xnvme_bdev" 00:18:05.379 }, 00:18:05.379 "method": "bdev_xnvme_create" 00:18:05.379 }, 00:18:05.379 { 00:18:05.379 "method": "bdev_wait_for_examine" 00:18:05.379 } 00:18:05.379 ] 00:18:05.379 } 00:18:05.379 ] 00:18:05.379 } 00:18:05.379 [2024-12-05 19:36:24.328916] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:18:05.379 [2024-12-05 19:36:24.329028] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71211 ] 00:18:05.639 [2024-12-05 19:36:24.488553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.639 [2024-12-05 19:36:24.586038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.897 Running I/O for 5 seconds... 
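Each of these Latency(us) tables is one iteration of the same loop: xnvme_bdevperf binds io_pattern_ref to the io_uring_cmd pattern list and reruns bdevperf once per workload. A rough reconstruction of that loop, assuming the list holds exactly the four workloads seen in this log:

  run_patterns() {                      # sketch of xnvme_bdevperf's inner loop
      local -n io_pattern_ref=io_uring_cmd
      local io_pattern
      for io_pattern in "${io_pattern_ref[@]}"; do
          /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(gen_conf) \
              -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
      done
  }
  io_uring_cmd=(randread randwrite unmap write_zeroes)  # assumption: exact list contents
  run_patterns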
00:18:08.241 83456.00 IOPS, 326.00 MiB/s [2024-12-05T19:36:28.180Z] 84064.00 IOPS, 328.38 MiB/s [2024-12-05T19:36:29.122Z] 79978.67 IOPS, 312.42 MiB/s [2024-12-05T19:36:30.066Z] 82416.00 IOPS, 321.94 MiB/s 00:18:11.060 Latency(us) 00:18:11.060 [2024-12-05T19:36:30.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.060 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:18:11.060 xnvme_bdev : 5.00 81129.08 316.91 0.00 0.00 785.40 519.88 2608.84 00:18:11.060 [2024-12-05T19:36:30.066Z] =================================================================================================================== 00:18:11.060 [2024-12-05T19:36:30.066Z] Total : 81129.08 316.91 0.00 0.00 785.40 519.88 2608.84 00:18:11.632 19:36:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:11.632 19:36:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:18:11.632 19:36:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:11.632 19:36:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:11.632 19:36:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:11.632 { 00:18:11.632 "subsystems": [ 00:18:11.632 { 00:18:11.632 "subsystem": "bdev", 00:18:11.632 "config": [ 00:18:11.632 { 00:18:11.632 "params": { 00:18:11.632 "io_mechanism": "io_uring_cmd", 00:18:11.632 "conserve_cpu": false, 00:18:11.632 "filename": "/dev/ng0n1", 00:18:11.632 "name": "xnvme_bdev" 00:18:11.632 }, 00:18:11.632 "method": "bdev_xnvme_create" 00:18:11.632 }, 00:18:11.632 { 00:18:11.632 "method": "bdev_wait_for_examine" 00:18:11.632 } 00:18:11.632 ] 00:18:11.632 } 00:18:11.632 ] 00:18:11.632 } 00:18:11.632 [2024-12-05 19:36:30.577901] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:18:11.632 [2024-12-05 19:36:30.578017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71285 ] 00:18:11.894 [2024-12-05 19:36:30.737520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.894 [2024-12-05 19:36:30.832040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.152 Running I/O for 5 seconds... 
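The MiB/s column in these tables is straight arithmetic on the IOPS column at the 4096-byte I/O size; for the unmap total above, 81129.08 IOPS x 4096 B works out to the 316.91 MiB/s shown. As a one-liner:

  awk 'BEGIN { printf "%.2f MiB/s\n", 81129.08 * 4096 / (1024 * 1024) }'   # 316.91 MiB/s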
00:18:14.481 11846.00 IOPS, 46.27 MiB/s [2024-12-05T19:36:34.434Z] 11871.50 IOPS, 46.37 MiB/s [2024-12-05T19:36:35.379Z] 9838.67 IOPS, 38.43 MiB/s [2024-12-05T19:36:36.321Z] 8173.75 IOPS, 31.93 MiB/s [2024-12-05T19:36:36.321Z] 7487.60 IOPS, 29.25 MiB/s 00:18:17.315 Latency(us) 00:18:17.315 [2024-12-05T19:36:36.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.315 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:18:17.315 xnvme_bdev : 5.01 7485.04 29.24 0.00 0.00 8539.36 163.05 787238.60 00:18:17.315 [2024-12-05T19:36:36.321Z] =================================================================================================================== 00:18:17.315 [2024-12-05T19:36:36.321Z] Total : 7485.04 29.24 0.00 0.00 8539.36 163.05 787238.60 00:18:17.888 00:18:17.888 real 0m25.336s 00:18:17.888 user 0m14.439s 00:18:17.888 sys 0m10.431s 00:18:17.888 19:36:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:17.888 19:36:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:17.888 ************************************ 00:18:17.888 END TEST xnvme_bdevperf 00:18:17.888 ************************************ 00:18:18.151 19:36:36 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:18.151 19:36:36 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:18.151 19:36:36 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:18.151 19:36:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:18.151 ************************************ 00:18:18.151 START TEST xnvme_fio_plugin 00:18:18.151 ************************************ 00:18:18.151 19:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:18.151 19:36:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:18.151 19:36:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:18:18.151 19:36:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:18.151 19:36:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:18.151 19:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:18.151 19:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:18.151 19:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:18.151 19:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:18.151 19:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:18.151 19:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:18.151 19:36:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:18.151 19:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local 
asan_lib= 00:18:18.151 19:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:18.151 19:36:36 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:18.151 19:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:18.151 19:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:18.151 19:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:18.151 19:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:18.151 19:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:18.151 19:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:18.151 19:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:18.151 19:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:18.151 19:36:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:18.151 { 00:18:18.151 "subsystems": [ 00:18:18.151 { 00:18:18.151 "subsystem": "bdev", 00:18:18.151 "config": [ 00:18:18.151 { 00:18:18.151 "params": { 00:18:18.151 "io_mechanism": "io_uring_cmd", 00:18:18.151 "conserve_cpu": false, 00:18:18.151 "filename": "/dev/ng0n1", 00:18:18.151 "name": "xnvme_bdev" 00:18:18.151 }, 00:18:18.151 "method": "bdev_xnvme_create" 00:18:18.151 }, 00:18:18.151 { 00:18:18.151 "method": "bdev_wait_for_examine" 00:18:18.151 } 00:18:18.151 ] 00:18:18.151 } 00:18:18.151 ] 00:18:18.151 } 00:18:18.151 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:18.151 fio-3.35 00:18:18.151 Starting 1 thread 00:18:24.744 00:18:24.744 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71402: Thu Dec 5 19:36:42 2024 00:18:24.744 read: IOPS=34.2k, BW=133MiB/s (140MB/s)(667MiB/5001msec) 00:18:24.744 slat (usec): min=2, max=100, avg= 3.98, stdev= 2.55 00:18:24.744 clat (usec): min=967, max=4105, avg=1710.04, stdev=304.44 00:18:24.744 lat (usec): min=970, max=4118, avg=1714.03, stdev=304.93 00:18:24.744 clat percentiles (usec): 00:18:24.744 | 1.00th=[ 1172], 5.00th=[ 1303], 10.00th=[ 1369], 20.00th=[ 1450], 00:18:24.744 | 30.00th=[ 1516], 40.00th=[ 1598], 50.00th=[ 1663], 60.00th=[ 1745], 00:18:24.744 | 70.00th=[ 1844], 80.00th=[ 1942], 90.00th=[ 2114], 95.00th=[ 2278], 00:18:24.744 | 99.00th=[ 2573], 99.50th=[ 2671], 99.90th=[ 3032], 99.95th=[ 3392], 00:18:24.744 | 99.99th=[ 4047] 00:18:24.744 bw ( KiB/s): min=131584, max=139776, per=100.00%, avg=136730.89, stdev=2497.92, samples=9 00:18:24.744 iops : min=32896, max=34944, avg=34182.67, stdev=624.53, samples=9 00:18:24.744 lat (usec) : 1000=0.01% 00:18:24.744 lat (msec) : 2=83.60%, 4=16.36%, 10=0.02% 00:18:24.744 cpu : usr=36.96%, sys=61.64%, ctx=9, majf=0, minf=762 00:18:24.744 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:24.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:24.744 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, 
>=64=0.0% 00:18:24.744 issued rwts: total=170816,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:24.744 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:24.744 00:18:24.744 Run status group 0 (all jobs): 00:18:24.744 READ: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=667MiB (700MB), run=5001-5001msec 00:18:25.005 ----------------------------------------------------- 00:18:25.005 Suppressions used: 00:18:25.005 count bytes template 00:18:25.005 1 11 /usr/src/fio/parse.c 00:18:25.005 1 8 libtcmalloc_minimal.so 00:18:25.005 1 904 libcrypto.so 00:18:25.005 ----------------------------------------------------- 00:18:25.005 00:18:25.005 19:36:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:25.005 19:36:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:25.005 19:36:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:25.005 19:36:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:25.005 19:36:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:25.005 19:36:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:25.005 19:36:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:25.005 19:36:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:25.005 19:36:43 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:25.005 19:36:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:25.005 19:36:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:25.005 19:36:43 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:25.005 19:36:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:25.005 19:36:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:25.005 19:36:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:25.005 19:36:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:25.005 19:36:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:25.005 19:36:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:25.005 19:36:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:25.005 19:36:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:25.005 19:36:43 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 
--rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:25.005 { 00:18:25.005 "subsystems": [ 00:18:25.005 { 00:18:25.005 "subsystem": "bdev", 00:18:25.005 "config": [ 00:18:25.005 { 00:18:25.005 "params": { 00:18:25.005 "io_mechanism": "io_uring_cmd", 00:18:25.005 "conserve_cpu": false, 00:18:25.005 "filename": "/dev/ng0n1", 00:18:25.005 "name": "xnvme_bdev" 00:18:25.005 }, 00:18:25.005 "method": "bdev_xnvme_create" 00:18:25.005 }, 00:18:25.005 { 00:18:25.005 "method": "bdev_wait_for_examine" 00:18:25.005 } 00:18:25.005 ] 00:18:25.005 } 00:18:25.005 ] 00:18:25.005 } 00:18:25.265 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:25.265 fio-3.35 00:18:25.265 Starting 1 thread 00:18:31.847 00:18:31.847 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71488: Thu Dec 5 19:36:49 2024 00:18:31.847 write: IOPS=30.9k, BW=121MiB/s (127MB/s)(604MiB/5002msec); 0 zone resets 00:18:31.847 slat (nsec): min=2913, max=92107, avg=4135.07, stdev=2568.14 00:18:31.847 clat (usec): min=81, max=19475, avg=1913.18, stdev=1747.02 00:18:31.848 lat (usec): min=84, max=19478, avg=1917.31, stdev=1747.13 00:18:31.848 clat percentiles (usec): 00:18:31.848 | 1.00th=[ 478], 5.00th=[ 996], 10.00th=[ 1205], 20.00th=[ 1369], 00:18:31.848 | 30.00th=[ 1467], 40.00th=[ 1549], 50.00th=[ 1614], 60.00th=[ 1713], 00:18:31.848 | 70.00th=[ 1811], 80.00th=[ 1926], 90.00th=[ 2147], 95.00th=[ 2409], 00:18:31.848 | 99.00th=[11994], 99.50th=[13173], 99.90th=[15926], 99.95th=[16909], 00:18:31.848 | 99.99th=[18220] 00:18:31.848 bw ( KiB/s): min=96800, max=157496, per=100.00%, avg=128381.67, stdev=19885.49, samples=9 00:18:31.848 iops : min=24200, max=39374, avg=32095.33, stdev=4971.34, samples=9 00:18:31.848 lat (usec) : 100=0.01%, 250=0.16%, 500=0.89%, 750=2.02%, 1000=1.94% 00:18:31.848 lat (msec) : 2=79.04%, 4=12.71%, 10=1.06%, 20=2.17% 00:18:31.848 cpu : usr=35.87%, sys=62.77%, ctx=42, majf=0, minf=763 00:18:31.848 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.3%, 16=22.8%, 32=53.4%, >=64=2.7% 00:18:31.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:31.848 complete : 0=0.0%, 4=98.0%, 8=0.2%, 16=0.2%, 32=0.1%, 64=1.4%, >=64=0.0% 00:18:31.848 issued rwts: total=0,154678,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:31.848 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:31.848 00:18:31.848 Run status group 0 (all jobs): 00:18:31.848 WRITE: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=604MiB (634MB), run=5002-5002msec 00:18:31.848 ----------------------------------------------------- 00:18:31.848 Suppressions used: 00:18:31.848 count bytes template 00:18:31.848 1 11 /usr/src/fio/parse.c 00:18:31.848 1 8 libtcmalloc_minimal.so 00:18:31.848 1 904 libcrypto.so 00:18:31.848 ----------------------------------------------------- 00:18:31.848 00:18:31.848 00:18:31.848 real 0m13.818s 00:18:31.848 user 0m6.567s 00:18:31.848 sys 0m6.792s 00:18:31.848 ************************************ 00:18:31.848 END TEST xnvme_fio_plugin 00:18:31.848 ************************************ 00:18:31.848 19:36:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:31.848 19:36:50 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:31.848 19:36:50 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:18:31.848 19:36:50 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:18:31.848 19:36:50 nvme_xnvme -- 
xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:18:31.848 19:36:50 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:18:31.848 19:36:50 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:31.848 19:36:50 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:31.848 19:36:50 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:31.848 ************************************ 00:18:31.848 START TEST xnvme_rpc 00:18:31.848 ************************************ 00:18:31.848 19:36:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:18:31.848 19:36:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:18:31.848 19:36:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:18:31.848 19:36:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:18:31.848 19:36:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:18:31.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.848 19:36:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71568 00:18:31.848 19:36:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71568 00:18:31.848 19:36:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71568 ']' 00:18:31.848 19:36:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.848 19:36:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.848 19:36:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.848 19:36:50 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:31.848 19:36:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.848 19:36:50 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.110 [2024-12-05 19:36:50.929675] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
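The killprocess helper traced earlier (tearing down spdk_tgt pid 70984) reduces to the shape below; a simplified sketch — the real helper also special-cases process_name, which is where the reactor_0-vs-sudo comparison in the trace comes from:

  killprocess() {
      local pid=$1 process_name
      [ -z "$pid" ] && return 1                        # the '[' -z ... ']' guard
      kill -0 "$pid" 2>/dev/null || return 0           # process already gone
      [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }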
00:18:32.110 [2024-12-05 19:36:50.929832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71568 ] 00:18:32.110 [2024-12-05 19:36:51.093568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.372 [2024-12-05 19:36:51.212960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.945 19:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.945 19:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:18:32.945 19:36:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:18:32.945 19:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.945 19:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.945 xnvme_bdev 00:18:32.945 19:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.945 19:36:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:18:32.945 19:36:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:32.945 19:36:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:18:32.945 19:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.945 19:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:32.945 19:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.206 19:36:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:18:33.206 19:36:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:18:33.206 19:36:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:33.206 19:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.206 19:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.206 19:36:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:18:33.206 19:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.206 19:36:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:18:33.206 19:36:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:18:33.206 19:36:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:33.206 19:36:51 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:18:33.206 19:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.206 19:36:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.206 19:36:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.206 19:36:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:18:33.206 19:36:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:18:33.206 19:36:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:18:33.206 
19:36:52 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:18:33.206 19:36:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.206 19:36:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.206 19:36:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.206 19:36:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:18:33.206 19:36:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:18:33.206 19:36:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.206 19:36:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:33.206 19:36:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.206 19:36:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71568 00:18:33.206 19:36:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71568 ']' 00:18:33.206 19:36:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71568 00:18:33.206 19:36:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:18:33.206 19:36:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.206 19:36:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71568 00:18:33.206 killing process with pid 71568 00:18:33.206 19:36:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:33.206 19:36:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:33.206 19:36:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71568' 00:18:33.206 19:36:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71568 00:18:33.206 19:36:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71568 00:18:35.121 ************************************ 00:18:35.121 END TEST xnvme_rpc 00:18:35.121 ************************************ 00:18:35.121 00:18:35.121 real 0m2.876s 00:18:35.121 user 0m2.916s 00:18:35.121 sys 0m0.431s 00:18:35.121 19:36:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:35.121 19:36:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:35.121 19:36:53 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:18:35.121 19:36:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:35.121 19:36:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:35.121 19:36:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:35.121 ************************************ 00:18:35.121 START TEST xnvme_bdevperf 00:18:35.121 ************************************ 00:18:35.121 19:36:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:18:35.121 19:36:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:18:35.121 19:36:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:18:35.121 19:36:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:35.121 19:36:53 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:18:35.121 19:36:53 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:18:35.121 19:36:53 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:35.121 19:36:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:35.121 { 00:18:35.121 "subsystems": [ 00:18:35.121 { 00:18:35.121 "subsystem": "bdev", 00:18:35.121 "config": [ 00:18:35.121 { 00:18:35.121 "params": { 00:18:35.121 "io_mechanism": "io_uring_cmd", 00:18:35.121 "conserve_cpu": true, 00:18:35.121 "filename": "/dev/ng0n1", 00:18:35.121 "name": "xnvme_bdev" 00:18:35.121 }, 00:18:35.121 "method": "bdev_xnvme_create" 00:18:35.121 }, 00:18:35.121 { 00:18:35.121 "method": "bdev_wait_for_examine" 00:18:35.121 } 00:18:35.121 ] 00:18:35.121 } 00:18:35.121 ] 00:18:35.121 } 00:18:35.121 [2024-12-05 19:36:53.857055] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:18:35.121 [2024-12-05 19:36:53.857392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71642 ] 00:18:35.121 [2024-12-05 19:36:54.021477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.384 [2024-12-05 19:36:54.142023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.645 Running I/O for 5 seconds... 00:18:37.528 35703.00 IOPS, 139.46 MiB/s [2024-12-05T19:36:57.476Z] 35695.50 IOPS, 139.44 MiB/s [2024-12-05T19:36:58.880Z] 35310.67 IOPS, 137.93 MiB/s [2024-12-05T19:36:59.460Z] 35248.75 IOPS, 137.69 MiB/s 00:18:40.454 Latency(us) 00:18:40.454 [2024-12-05T19:36:59.460Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.454 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:18:40.454 xnvme_bdev : 5.00 35099.58 137.11 0.00 0.00 1818.90 850.71 11746.07 00:18:40.454 [2024-12-05T19:36:59.460Z] =================================================================================================================== 00:18:40.454 [2024-12-05T19:36:59.460Z] Total : 35099.58 137.11 0.00 0.00 1818.90 850.71 11746.07 00:18:41.503 19:37:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:41.503 19:37:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:18:41.503 19:37:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:41.503 19:37:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:41.503 19:37:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:41.503 { 00:18:41.503 "subsystems": [ 00:18:41.503 { 00:18:41.503 "subsystem": "bdev", 00:18:41.503 "config": [ 00:18:41.503 { 00:18:41.503 "params": { 00:18:41.503 "io_mechanism": "io_uring_cmd", 00:18:41.503 "conserve_cpu": true, 00:18:41.503 "filename": "/dev/ng0n1", 00:18:41.503 "name": "xnvme_bdev" 00:18:41.503 }, 00:18:41.503 "method": "bdev_xnvme_create" 00:18:41.503 }, 00:18:41.503 { 00:18:41.503 "method": "bdev_wait_for_examine" 00:18:41.503 } 00:18:41.503 ] 00:18:41.503 } 00:18:41.503 ] 00:18:41.503 } 00:18:41.503 [2024-12-05 19:37:00.290267] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
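Stripped of the xtrace noise, the xnvme_rpc pass above (spdk_tgt pid 71568) is just this RPC sequence; every command here appears verbatim in the trace, and -c is what flips conserve_cpu to true for this second round of tests:

  rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
  rpc_cmd framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'          # xnvme_bdev
  rpc_cmd framework_get_config bdev \
      | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'  # true
  rpc_cmd bdev_xnvme_delete xnvme_bdev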
00:18:41.503 [2024-12-05 19:37:00.290576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71722 ] 00:18:41.503 [2024-12-05 19:37:00.456530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.764 [2024-12-05 19:37:00.577321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:42.026 Running I/O for 5 seconds... 00:18:43.910 25415.00 IOPS, 99.28 MiB/s [2024-12-05T19:37:04.297Z] 30059.50 IOPS, 117.42 MiB/s [2024-12-05T19:37:05.233Z] 31686.00 IOPS, 123.77 MiB/s [2024-12-05T19:37:06.168Z] 32592.75 IOPS, 127.32 MiB/s [2024-12-05T19:37:06.168Z] 29976.00 IOPS, 117.09 MiB/s 00:18:47.162 Latency(us) 00:18:47.162 [2024-12-05T19:37:06.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.162 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:18:47.162 xnvme_bdev : 5.01 29933.91 116.93 0.00 0.00 2131.82 85.07 72997.02 00:18:47.162 [2024-12-05T19:37:06.168Z] =================================================================================================================== 00:18:47.162 [2024-12-05T19:37:06.168Z] Total : 29933.91 116.93 0.00 0.00 2131.82 85.07 72997.02 00:18:47.728 19:37:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:47.728 19:37:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:18:47.728 19:37:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:47.728 19:37:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:47.728 19:37:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:47.728 { 00:18:47.728 "subsystems": [ 00:18:47.728 { 00:18:47.728 "subsystem": "bdev", 00:18:47.728 "config": [ 00:18:47.728 { 00:18:47.728 "params": { 00:18:47.728 "io_mechanism": "io_uring_cmd", 00:18:47.728 "conserve_cpu": true, 00:18:47.728 "filename": "/dev/ng0n1", 00:18:47.728 "name": "xnvme_bdev" 00:18:47.728 }, 00:18:47.728 "method": "bdev_xnvme_create" 00:18:47.728 }, 00:18:47.728 { 00:18:47.728 "method": "bdev_wait_for_examine" 00:18:47.729 } 00:18:47.729 ] 00:18:47.729 } 00:18:47.729 ] 00:18:47.729 } 00:18:47.729 [2024-12-05 19:37:06.731983] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:18:47.729 [2024-12-05 19:37:06.732149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71795 ] 00:18:47.988 [2024-12-05 19:37:06.896535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.247 [2024-12-05 19:37:07.016038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.514 Running I/O for 5 seconds... 
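To pull the headline numbers out of one of these runs programmatically, the Total row is a stable anchor. A sketch, assuming the bdevperf output was captured to bdevperf.log (hypothetical file name; the last seven fields of that row are IOPS, MiB/s, Fail/s, TO/s, Average, min, max):

  awk '/Total :/ { print $(NF-6) " IOPS, " $(NF-5) " MiB/s"; exit }' bdevperf.log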
00:18:50.396 79232.00 IOPS, 309.50 MiB/s [2024-12-05T19:37:10.343Z] 79360.00 IOPS, 310.00 MiB/s [2024-12-05T19:37:11.727Z] 79061.33 IOPS, 308.83 MiB/s [2024-12-05T19:37:12.669Z] 82704.00 IOPS, 323.06 MiB/s [2024-12-05T19:37:12.669Z] 82329.60 IOPS, 321.60 MiB/s 00:18:53.663 Latency(us) 00:18:53.663 [2024-12-05T19:37:12.669Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.663 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:18:53.663 xnvme_bdev : 5.00 82302.32 321.49 0.00 0.00 774.23 343.43 3654.89 00:18:53.663 [2024-12-05T19:37:12.669Z] =================================================================================================================== 00:18:53.663 [2024-12-05T19:37:12.669Z] Total : 82302.32 321.49 0.00 0.00 774.23 343.43 3654.89 00:18:54.235 19:37:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:54.235 19:37:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:18:54.235 19:37:13 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:18:54.235 19:37:13 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:18:54.235 19:37:13 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:54.235 { 00:18:54.235 "subsystems": [ 00:18:54.235 { 00:18:54.235 "subsystem": "bdev", 00:18:54.235 "config": [ 00:18:54.235 { 00:18:54.235 "params": { 00:18:54.235 "io_mechanism": "io_uring_cmd", 00:18:54.235 "conserve_cpu": true, 00:18:54.235 "filename": "/dev/ng0n1", 00:18:54.235 "name": "xnvme_bdev" 00:18:54.235 }, 00:18:54.235 "method": "bdev_xnvme_create" 00:18:54.235 }, 00:18:54.235 { 00:18:54.235 "method": "bdev_wait_for_examine" 00:18:54.235 } 00:18:54.235 ] 00:18:54.235 } 00:18:54.235 ] 00:18:54.235 } 00:18:54.235 [2024-12-05 19:37:13.153313] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:18:54.235 [2024-12-05 19:37:13.153411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71866 ] 00:18:54.495 [2024-12-05 19:37:13.308627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.495 [2024-12-05 19:37:13.428108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.755 Running I/O for 5 seconds... 
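The xnvme_fio_plugin passes (one completed earlier, another following the write_zeroes run below) wrap fio rather than bdevperf, preloading ASAN ahead of the spdk_bdev engine exactly as the trace shows. A condensed sketch, with the JSON fed from a file instead of /dev/fd/62 (/tmp/bdev.json is hypothetical):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
      --spdk_json_conf=/tmp/bdev.json --filename=xnvme_bdev --direct=1 --bs=4k \
      --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 \
      --thread=1 --name xnvme_bdev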
00:18:57.079 33936.00 IOPS, 132.56 MiB/s [2024-12-05T19:37:17.031Z] 31545.00 IOPS, 123.22 MiB/s [2024-12-05T19:37:17.972Z] 27663.33 IOPS, 108.06 MiB/s [2024-12-05T19:37:18.914Z] 25269.75 IOPS, 98.71 MiB/s [2024-12-05T19:37:18.914Z] 22094.20 IOPS, 86.31 MiB/s 00:18:59.908 Latency(us) 00:18:59.908 [2024-12-05T19:37:18.914Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.908 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:18:59.908 xnvme_bdev : 5.10 21663.13 84.62 0.00 0.00 2946.84 54.74 487184.54 00:18:59.908 [2024-12-05T19:37:18.914Z] =================================================================================================================== 00:18:59.908 [2024-12-05T19:37:18.914Z] Total : 21663.13 84.62 0.00 0.00 2946.84 54.74 487184.54 00:19:00.850 00:19:00.850 real 0m25.821s 00:19:00.850 user 0m17.460s 00:19:00.850 sys 0m6.571s 00:19:00.850 19:37:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:00.850 ************************************ 00:19:00.850 END TEST xnvme_bdevperf 00:19:00.850 ************************************ 00:19:00.850 19:37:19 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:00.850 19:37:19 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:19:00.850 19:37:19 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:00.850 19:37:19 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:00.850 19:37:19 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:00.850 ************************************ 00:19:00.850 START TEST xnvme_fio_plugin 00:19:00.850 ************************************ 00:19:00.850 19:37:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:19:00.850 19:37:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:19:00.850 19:37:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:19:00.851 19:37:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:00.851 19:37:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:00.851 19:37:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:00.851 19:37:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:00.851 19:37:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:00.851 19:37:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:00.851 19:37:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:00.851 19:37:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:00.851 19:37:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:00.851 19:37:19 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:00.851 19:37:19 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:00.851 19:37:19 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:00.851 19:37:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:00.851 19:37:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:00.851 19:37:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:00.851 19:37:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:00.851 19:37:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:00.851 19:37:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:00.851 19:37:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:00.851 19:37:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:00.851 19:37:19 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:00.851 { 00:19:00.851 "subsystems": [ 00:19:00.851 { 00:19:00.851 "subsystem": "bdev", 00:19:00.851 "config": [ 00:19:00.851 { 00:19:00.851 "params": { 00:19:00.851 "io_mechanism": "io_uring_cmd", 00:19:00.851 "conserve_cpu": true, 00:19:00.851 "filename": "/dev/ng0n1", 00:19:00.851 "name": "xnvme_bdev" 00:19:00.851 }, 00:19:00.851 "method": "bdev_xnvme_create" 00:19:00.851 }, 00:19:00.851 { 00:19:00.851 "method": "bdev_wait_for_examine" 00:19:00.851 } 00:19:00.851 ] 00:19:00.851 } 00:19:00.851 ] 00:19:00.851 } 00:19:01.111 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:01.111 fio-3.35 00:19:01.111 Starting 1 thread 00:19:07.701 00:19:07.701 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71984: Thu Dec 5 19:37:25 2024 00:19:07.701 read: IOPS=35.0k, BW=137MiB/s (143MB/s)(683MiB/5001msec) 00:19:07.701 slat (usec): min=2, max=136, avg= 3.82, stdev= 2.27 00:19:07.701 clat (usec): min=892, max=3316, avg=1674.36, stdev=295.66 00:19:07.701 lat (usec): min=895, max=3453, avg=1678.18, stdev=296.25 00:19:07.701 clat percentiles (usec): 00:19:07.701 | 1.00th=[ 1123], 5.00th=[ 1254], 10.00th=[ 1336], 20.00th=[ 1434], 00:19:07.701 | 30.00th=[ 1500], 40.00th=[ 1565], 50.00th=[ 1647], 60.00th=[ 1713], 00:19:07.701 | 70.00th=[ 1795], 80.00th=[ 1909], 90.00th=[ 2057], 95.00th=[ 2212], 00:19:07.701 | 99.00th=[ 2507], 99.50th=[ 2671], 99.90th=[ 2966], 99.95th=[ 3032], 00:19:07.701 | 99.99th=[ 3130] 00:19:07.701 bw ( KiB/s): min=134144, max=156160, per=100.00%, avg=140401.78, stdev=6915.69, samples=9 00:19:07.701 iops : min=33536, max=39040, avg=35100.44, stdev=1728.92, samples=9 00:19:07.701 lat (usec) : 1000=0.11% 00:19:07.701 lat (msec) : 2=86.77%, 4=13.12% 00:19:07.701 cpu : usr=56.84%, sys=39.92%, ctx=14, majf=0, minf=762 00:19:07.701 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:19:07.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.701 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 
64=1.5%, >=64=0.0% 00:19:07.701 issued rwts: total=174944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.701 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:07.701 00:19:07.701 Run status group 0 (all jobs): 00:19:07.701 READ: bw=137MiB/s (143MB/s), 137MiB/s-137MiB/s (143MB/s-143MB/s), io=683MiB (717MB), run=5001-5001msec 00:19:07.701 ----------------------------------------------------- 00:19:07.701 Suppressions used: 00:19:07.701 count bytes template 00:19:07.701 1 11 /usr/src/fio/parse.c 00:19:07.701 1 8 libtcmalloc_minimal.so 00:19:07.701 1 904 libcrypto.so 00:19:07.701 ----------------------------------------------------- 00:19:07.701 00:19:07.701 19:37:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:19:07.701 19:37:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:07.701 19:37:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:07.701 19:37:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:07.701 19:37:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:07.701 19:37:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:07.701 19:37:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:07.701 19:37:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:19:07.701 19:37:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:07.701 19:37:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:07.701 19:37:26 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:19:07.701 19:37:26 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:19:07.701 19:37:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:07.701 19:37:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:07.701 19:37:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:19:07.701 19:37:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:07.702 19:37:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:07.702 19:37:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:07.702 19:37:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:19:07.702 19:37:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:07.702 19:37:26 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:19:07.702 { 00:19:07.702 "subsystems": [ 00:19:07.702 { 00:19:07.702 "subsystem": "bdev", 00:19:07.702 "config": [ 00:19:07.702 { 00:19:07.702 "params": { 00:19:07.702 "io_mechanism": "io_uring_cmd", 00:19:07.702 "conserve_cpu": true, 00:19:07.702 "filename": "/dev/ng0n1", 00:19:07.702 "name": "xnvme_bdev" 00:19:07.702 }, 00:19:07.702 "method": "bdev_xnvme_create" 00:19:07.702 }, 00:19:07.702 { 00:19:07.702 "method": "bdev_wait_for_examine" 00:19:07.702 } 00:19:07.702 ] 00:19:07.702 } 00:19:07.702 ] 00:19:07.702 } 00:19:07.959 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:19:07.959 fio-3.35 00:19:07.959 Starting 1 thread 00:19:14.529 00:19:14.529 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72075: Thu Dec 5 19:37:32 2024 00:19:14.529 write: IOPS=27.3k, BW=107MiB/s (112MB/s)(535MiB/5018msec); 0 zone resets 00:19:14.529 slat (usec): min=2, max=185, avg= 3.81, stdev= 2.28 00:19:14.529 clat (usec): min=59, max=33484, avg=2205.80, stdev=3565.08 00:19:14.529 lat (usec): min=63, max=33488, avg=2209.61, stdev=3565.17 00:19:14.529 clat percentiles (usec): 00:19:14.529 | 1.00th=[ 363], 5.00th=[ 898], 10.00th=[ 1074], 20.00th=[ 1188], 00:19:14.529 | 30.00th=[ 1270], 40.00th=[ 1336], 50.00th=[ 1418], 60.00th=[ 1483], 00:19:14.529 | 70.00th=[ 1565], 80.00th=[ 1680], 90.00th=[ 1909], 95.00th=[12387], 00:19:14.529 | 99.00th=[19268], 99.50th=[20841], 99.90th=[24249], 99.95th=[25035], 00:19:14.529 | 99.99th=[30278] 00:19:14.529 bw ( KiB/s): min=28296, max=169776, per=100.00%, avg=109566.40, stdev=67194.96, samples=10 00:19:14.529 iops : min= 7074, max=42444, avg=27391.60, stdev=16798.74, samples=10 00:19:14.529 lat (usec) : 100=0.04%, 250=0.48%, 500=1.55%, 750=2.08%, 1000=2.51% 00:19:14.529 lat (msec) : 2=85.21%, 4=2.98%, 10=0.03%, 20=4.38%, 50=0.74% 00:19:14.529 cpu : usr=72.85%, sys=22.86%, ctx=10, majf=0, minf=763 00:19:14.529 IO depths : 1=1.4%, 2=2.7%, 4=5.5%, 8=11.2%, 16=22.4%, 32=52.7%, >=64=4.2% 00:19:14.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:14.529 complete : 0=0.0%, 4=97.9%, 8=0.5%, 16=0.2%, 32=0.1%, 64=1.3%, >=64=0.0% 00:19:14.529 issued rwts: total=0,137019,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:14.529 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:14.529 00:19:14.529 Run status group 0 (all jobs): 00:19:14.529 WRITE: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=535MiB (561MB), run=5018-5018msec 00:19:14.529 ----------------------------------------------------- 00:19:14.529 Suppressions used: 00:19:14.529 count bytes template 00:19:14.529 1 11 /usr/src/fio/parse.c 00:19:14.529 1 8 libtcmalloc_minimal.so 00:19:14.529 1 904 libcrypto.so 00:19:14.529 ----------------------------------------------------- 00:19:14.529 00:19:14.529 00:19:14.529 real 0m13.670s 00:19:14.529 user 0m9.285s 00:19:14.529 sys 0m3.691s 00:19:14.529 19:37:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.529 ************************************ 00:19:14.529 END TEST xnvme_fio_plugin 00:19:14.529 ************************************ 00:19:14.529 19:37:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:19:14.529 19:37:33 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 71568 00:19:14.529 19:37:33 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 71568 ']' 00:19:14.529 Process with pid 71568 is not found 00:19:14.529 
19:37:33 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 71568 00:19:14.529 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (71568) - No such process 00:19:14.529 19:37:33 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 71568 is not found' 00:19:14.529 00:19:14.529 real 3m30.859s 00:19:14.529 user 2m3.219s 00:19:14.529 sys 1m13.591s 00:19:14.529 19:37:33 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.529 19:37:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:14.529 ************************************ 00:19:14.529 END TEST nvme_xnvme 00:19:14.529 ************************************ 00:19:14.529 19:37:33 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:19:14.529 19:37:33 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:14.529 19:37:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:14.529 19:37:33 -- common/autotest_common.sh@10 -- # set +x 00:19:14.529 ************************************ 00:19:14.529 START TEST blockdev_xnvme 00:19:14.529 ************************************ 00:19:14.529 19:37:33 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:19:14.529 * Looking for test storage... 00:19:14.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:14.529 19:37:33 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:14.529 19:37:33 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:19:14.529 19:37:33 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:14.788 19:37:33 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:14.788 19:37:33 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:19:14.788 19:37:33 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:14.788 19:37:33 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:14.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.788 --rc genhtml_branch_coverage=1 00:19:14.788 --rc genhtml_function_coverage=1 00:19:14.788 --rc genhtml_legend=1 00:19:14.788 --rc geninfo_all_blocks=1 00:19:14.788 --rc geninfo_unexecuted_blocks=1 00:19:14.788 00:19:14.788 ' 00:19:14.788 19:37:33 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:14.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.788 --rc genhtml_branch_coverage=1 00:19:14.788 --rc genhtml_function_coverage=1 00:19:14.788 --rc genhtml_legend=1 00:19:14.788 --rc geninfo_all_blocks=1 00:19:14.788 --rc geninfo_unexecuted_blocks=1 00:19:14.788 00:19:14.788 ' 00:19:14.788 19:37:33 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:14.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.788 --rc genhtml_branch_coverage=1 00:19:14.788 --rc genhtml_function_coverage=1 00:19:14.788 --rc genhtml_legend=1 00:19:14.788 --rc geninfo_all_blocks=1 00:19:14.788 --rc geninfo_unexecuted_blocks=1 00:19:14.788 00:19:14.788 ' 00:19:14.788 19:37:33 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:14.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.788 --rc genhtml_branch_coverage=1 00:19:14.788 --rc genhtml_function_coverage=1 00:19:14.788 --rc genhtml_legend=1 00:19:14.788 --rc geninfo_all_blocks=1 00:19:14.788 --rc geninfo_unexecuted_blocks=1 00:19:14.788 00:19:14.788 ' 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=72213 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 72213 00:19:14.788 19:37:33 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:14.788 19:37:33 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 72213 ']' 00:19:14.788 19:37:33 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.788 19:37:33 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:14.788 19:37:33 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.788 19:37:33 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:14.788 19:37:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:14.788 [2024-12-05 19:37:33.674907] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
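
The trace above steps through the version comparison in scripts/common.sh: each dotted version string is split into a per-field array (ver1, ver2) and the fields are compared numerically until one side wins, which here decides which lcov/LCOV_OPTS flag set gets exported. A minimal standalone sketch of the same idea, assuming purely numeric fields (function and variable names are mine, not the SPDK helpers):

    # Return 0 if dotted version $1 is strictly lower than $2.
    version_lt() {
        local IFS=.
        local -a v1=($1) v2=($2)              # IFS=. splits on the dots
        local i a b n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            a=${v1[i]:-0} b=${v2[i]:-0}       # pad missing fields with zeros
            (( 10#$a < 10#$b )) && return 0   # 10# forces base 10, avoids octal
            (( 10#$a > 10#$b )) && return 1
        done
        return 1                              # equal versions are not "lower"
    }

    # Illustrative use, mirroring the lcov check in the trace:
    version_lt 1.14 2.0 && echo "lcov predates 2.0"
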
00:19:14.789 [2024-12-05 19:37:33.675190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72213 ] 00:19:15.047 [2024-12-05 19:37:33.828303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.047 [2024-12-05 19:37:33.924243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.612 19:37:34 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:15.612 19:37:34 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:19:15.612 19:37:34 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:19:15.612 19:37:34 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:19:15.612 19:37:34 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:19:15.612 19:37:34 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:19:15.612 19:37:34 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:19:16.181 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:16.442 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:19:16.442 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:19:16.700 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:19:16.700 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:19:16.700 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:19:16.700 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:19:16.700 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:19:16.700 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:19:16.700 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:19:16.700 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:19:16.700 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:16.700 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:19:16.701 19:37:35 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2c2n1 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:19:16.701 nvme0n1 00:19:16.701 nvme0n2 00:19:16.701 nvme0n3 00:19:16.701 nvme1n1 00:19:16.701 nvme2n1 00:19:16.701 nvme3n1 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:16.701 
19:37:35 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:19:16.701 19:37:35 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.701 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:19:16.702 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "63d6e8c1-9e86-4152-a8d0-7fb96fdf2084"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "63d6e8c1-9e86-4152-a8d0-7fb96fdf2084",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "cdd3a312-46f0-43b4-b8bd-a35da6c9985f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cdd3a312-46f0-43b4-b8bd-a35da6c9985f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "304b04cb-e734-40cf-a1c4-6ddec1d431aa"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "304b04cb-e734-40cf-a1c4-6ddec1d431aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' 
"290dcd6d-1359-4bd3-ad01-ad282e481185"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "290dcd6d-1359-4bd3-ad01-ad282e481185",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "311ff68b-c39c-447e-b784-6b89dcdce020"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "311ff68b-c39c-447e-b784-6b89dcdce020",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "f3b591a0-0571-42ec-9ac9-19eb617bf609"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "f3b591a0-0571-42ec-9ac9-19eb617bf609",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:19:16.702 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:19:16.702 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:19:16.702 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:19:16.702 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:19:16.702 19:37:35 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 72213 00:19:16.702 19:37:35 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72213 ']' 00:19:16.702 19:37:35 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 72213 00:19:16.702 19:37:35 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:19:16.702 19:37:35 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:16.702 19:37:35 blockdev_xnvme -- common/autotest_common.sh@960 -- # 
ps --no-headers -o comm= 72213 00:19:16.960 killing process with pid 72213 00:19:16.960 19:37:35 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:16.960 19:37:35 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:16.960 19:37:35 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72213' 00:19:16.960 19:37:35 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 72213 00:19:16.960 19:37:35 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 72213 00:19:18.334 19:37:37 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:18.334 19:37:37 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:19:18.334 19:37:37 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:18.334 19:37:37 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:18.334 19:37:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:18.334 ************************************ 00:19:18.334 START TEST bdev_hello_world 00:19:18.334 ************************************ 00:19:18.334 19:37:37 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:19:18.334 [2024-12-05 19:37:37.286525] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:19:18.334 [2024-12-05 19:37:37.286763] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72492 ] 00:19:18.592 [2024-12-05 19:37:37.446228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.592 [2024-12-05 19:37:37.541488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.159 [2024-12-05 19:37:37.897621] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:19.159 [2024-12-05 19:37:37.897789] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:19:19.159 [2024-12-05 19:37:37.897811] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:19.159 [2024-12-05 19:37:37.899657] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:19.159 [2024-12-05 19:37:37.900163] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:19.159 [2024-12-05 19:37:37.900183] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:19.159 [2024-12-05 19:37:37.901171] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
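
For the xnvme configuration earlier in the trace, the harness enumerates every /dev/nvme*n* namespace that survived the zoned-device filter, builds one bdev_xnvme_create line per device, and pushes the whole batch through its rpc_cmd pipe (a persistent rpc.py attached to a FIFO). A simplified sketch of that pattern, assuming an SPDK checkout at ./spdk and invoking rpc.py once per line instead of keeping a server open; the -c flag is the conserve-CPU option seen in the trace:

    io_mechanism=io_uring
    nvmes=()
    for nvme in /dev/nvme*n*; do
        [[ -b $nvme ]] || continue     # only real block devices
        # bdev name mirrors the device node, e.g. /dev/nvme0n1 -> nvme0n1
        nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c")
    done
    for cmd in "${nvmes[@]}"; do
        # word splitting of $cmd is intended: it is "method arg arg ..."
        ./spdk/scripts/rpc.py $cmd
    done
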
00:19:19.159 00:19:19.159 [2024-12-05 19:37:37.901212] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:19.725 00:19:19.725 real 0m1.387s 00:19:19.725 user 0m1.092s 00:19:19.725 sys 0m0.156s 00:19:19.725 19:37:38 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:19.725 ************************************ 00:19:19.725 END TEST bdev_hello_world 00:19:19.725 ************************************ 00:19:19.725 19:37:38 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:19.725 19:37:38 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:19:19.725 19:37:38 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:19.725 19:37:38 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:19.725 19:37:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:19.725 ************************************ 00:19:19.725 START TEST bdev_bounds 00:19:19.725 ************************************ 00:19:19.725 19:37:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:19.725 19:37:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=72524 00:19:19.725 Process bdevio pid: 72524 00:19:19.725 19:37:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:19.725 19:37:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 72524' 00:19:19.725 19:37:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 72524 00:19:19.725 19:37:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 72524 ']' 00:19:19.725 19:37:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.725 19:37:38 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:19.725 19:37:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:19.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.725 19:37:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.725 19:37:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:19.725 19:37:38 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:19.983 [2024-12-05 19:37:38.741503] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
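
Both here and for the earlier spdk_tgt launch, the "Waiting for process to start up and listen on UNIX domain socket..." message comes from a poll loop: the helper repeatedly confirms the PID is still alive and probes the RPC socket until a trivial RPC answers or the retry budget (the max_retries=100 seen in the trace) runs out. A rough sketch of that loop; probing with rpc_get_methods is my assumption about the mechanism, and the real helper in autotest_common.sh differs in detail:

    # Poll until $pid answers RPCs on $sock, or give up after 100 tries.
    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died
            if ./spdk/scripts/rpc.py -t 1 -s "$sock" rpc_get_methods &>/dev/null; then
                return 0                              # socket is serving RPCs
            fi
            sleep 0.5
        done
        return 1
    }

    waitforlisten_sketch "$spdk_tgt_pid" /var/tmp/spdk.sock
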
00:19:19.983 [2024-12-05 19:37:38.741765] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72524 ] 00:19:19.983 [2024-12-05 19:37:38.900368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:20.241 [2024-12-05 19:37:38.999480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:20.241 [2024-12-05 19:37:38.999715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:20.241 [2024-12-05 19:37:38.999727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:20.807 19:37:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.807 19:37:39 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:20.807 19:37:39 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:20.807 I/O targets: 00:19:20.807 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:20.807 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:20.807 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:19:20.807 nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:19:20.807 nvme2n1: 262144 blocks of 4096 bytes (1024 MiB) 00:19:20.807 nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:19:20.807 00:19:20.807 00:19:20.807 CUnit - A unit testing framework for C - Version 2.1-3 00:19:20.807 http://cunit.sourceforge.net/ 00:19:20.807 00:19:20.807 00:19:20.807 Suite: bdevio tests on: nvme3n1 00:19:20.807 Test: blockdev write read block ...passed 00:19:20.807 Test: blockdev write zeroes read block ...passed 00:19:20.807 Test: blockdev write zeroes read no split ...passed 00:19:20.807 Test: blockdev write zeroes read split ...passed 00:19:20.807 Test: blockdev write zeroes read split partial ...passed 00:19:20.807 Test: blockdev reset ...passed 00:19:20.807 Test: blockdev write read 8 blocks ...passed 00:19:20.807 Test: blockdev write read size > 128k ...passed 00:19:20.807 Test: blockdev write read invalid size ...passed 00:19:20.807 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:20.807 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:20.807 Test: blockdev write read max offset ...passed 00:19:20.807 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:20.807 Test: blockdev writev readv 8 blocks ...passed 00:19:20.807 Test: blockdev writev readv 30 x 1block ...passed 00:19:20.807 Test: blockdev writev readv block ...passed 00:19:20.807 Test: blockdev writev readv size > 128k ...passed 00:19:20.807 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:20.807 Test: blockdev comparev and writev ...passed 00:19:20.807 Test: blockdev nvme passthru rw ...passed 00:19:20.807 Test: blockdev nvme passthru vendor specific ...passed 00:19:20.807 Test: blockdev nvme admin passthru ...passed 00:19:20.807 Test: blockdev copy ...passed 00:19:20.807 Suite: bdevio tests on: nvme2n1 00:19:20.807 Test: blockdev write read block ...passed 00:19:20.807 Test: blockdev write zeroes read block ...passed 00:19:20.807 Test: blockdev write zeroes read no split ...passed 00:19:20.807 Test: blockdev write zeroes read split ...passed 00:19:20.807 Test: blockdev write zeroes read split partial ...passed 00:19:20.807 Test: blockdev reset ...passed 
00:19:20.807 Test: blockdev write read 8 blocks ...passed 00:19:20.807 Test: blockdev write read size > 128k ...passed 00:19:20.807 Test: blockdev write read invalid size ...passed 00:19:20.807 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:20.807 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:20.807 Test: blockdev write read max offset ...passed 00:19:20.807 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:20.807 Test: blockdev writev readv 8 blocks ...passed 00:19:21.066 Test: blockdev writev readv 30 x 1block ...passed 00:19:21.066 Test: blockdev writev readv block ...passed 00:19:21.066 Test: blockdev writev readv size > 128k ...passed 00:19:21.066 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:21.066 Test: blockdev comparev and writev ...passed 00:19:21.066 Test: blockdev nvme passthru rw ...passed 00:19:21.066 Test: blockdev nvme passthru vendor specific ...passed 00:19:21.066 Test: blockdev nvme admin passthru ...passed 00:19:21.066 Test: blockdev copy ...passed 00:19:21.066 Suite: bdevio tests on: nvme1n1 00:19:21.066 Test: blockdev write read block ...passed 00:19:21.066 Test: blockdev write zeroes read block ...passed 00:19:21.066 Test: blockdev write zeroes read no split ...passed 00:19:21.066 Test: blockdev write zeroes read split ...passed 00:19:21.066 Test: blockdev write zeroes read split partial ...passed 00:19:21.066 Test: blockdev reset ...passed 00:19:21.066 Test: blockdev write read 8 blocks ...passed 00:19:21.066 Test: blockdev write read size > 128k ...passed 00:19:21.066 Test: blockdev write read invalid size ...passed 00:19:21.066 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:21.066 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:21.066 Test: blockdev write read max offset ...passed 00:19:21.066 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:21.066 Test: blockdev writev readv 8 blocks ...passed 00:19:21.066 Test: blockdev writev readv 30 x 1block ...passed 00:19:21.066 Test: blockdev writev readv block ...passed 00:19:21.066 Test: blockdev writev readv size > 128k ...passed 00:19:21.066 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:21.066 Test: blockdev comparev and writev ...passed 00:19:21.066 Test: blockdev nvme passthru rw ...passed 00:19:21.066 Test: blockdev nvme passthru vendor specific ...passed 00:19:21.066 Test: blockdev nvme admin passthru ...passed 00:19:21.066 Test: blockdev copy ...passed 00:19:21.066 Suite: bdevio tests on: nvme0n3 00:19:21.066 Test: blockdev write read block ...passed 00:19:21.066 Test: blockdev write zeroes read block ...passed 00:19:21.066 Test: blockdev write zeroes read no split ...passed 00:19:21.066 Test: blockdev write zeroes read split ...passed 00:19:21.066 Test: blockdev write zeroes read split partial ...passed 00:19:21.066 Test: blockdev reset ...passed 00:19:21.066 Test: blockdev write read 8 blocks ...passed 00:19:21.066 Test: blockdev write read size > 128k ...passed 00:19:21.066 Test: blockdev write read invalid size ...passed 00:19:21.066 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:21.066 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:21.066 Test: blockdev write read max offset ...passed 00:19:21.066 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:21.066 Test: blockdev writev readv 8 blocks 
...passed 00:19:21.066 Test: blockdev writev readv 30 x 1block ...passed 00:19:21.066 Test: blockdev writev readv block ...passed 00:19:21.066 Test: blockdev writev readv size > 128k ...passed 00:19:21.066 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:21.066 Test: blockdev comparev and writev ...passed 00:19:21.066 Test: blockdev nvme passthru rw ...passed 00:19:21.066 Test: blockdev nvme passthru vendor specific ...passed 00:19:21.066 Test: blockdev nvme admin passthru ...passed 00:19:21.066 Test: blockdev copy ...passed 00:19:21.066 Suite: bdevio tests on: nvme0n2 00:19:21.066 Test: blockdev write read block ...passed 00:19:21.066 Test: blockdev write zeroes read block ...passed 00:19:21.066 Test: blockdev write zeroes read no split ...passed 00:19:21.066 Test: blockdev write zeroes read split ...passed 00:19:21.066 Test: blockdev write zeroes read split partial ...passed 00:19:21.066 Test: blockdev reset ...passed 00:19:21.066 Test: blockdev write read 8 blocks ...passed 00:19:21.066 Test: blockdev write read size > 128k ...passed 00:19:21.066 Test: blockdev write read invalid size ...passed 00:19:21.066 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:21.066 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:21.066 Test: blockdev write read max offset ...passed 00:19:21.066 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:21.066 Test: blockdev writev readv 8 blocks ...passed 00:19:21.067 Test: blockdev writev readv 30 x 1block ...passed 00:19:21.067 Test: blockdev writev readv block ...passed 00:19:21.067 Test: blockdev writev readv size > 128k ...passed 00:19:21.067 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:21.067 Test: blockdev comparev and writev ...passed 00:19:21.067 Test: blockdev nvme passthru rw ...passed 00:19:21.067 Test: blockdev nvme passthru vendor specific ...passed 00:19:21.067 Test: blockdev nvme admin passthru ...passed 00:19:21.067 Test: blockdev copy ...passed 00:19:21.067 Suite: bdevio tests on: nvme0n1 00:19:21.067 Test: blockdev write read block ...passed 00:19:21.067 Test: blockdev write zeroes read block ...passed 00:19:21.067 Test: blockdev write zeroes read no split ...passed 00:19:21.324 Test: blockdev write zeroes read split ...passed 00:19:21.324 Test: blockdev write zeroes read split partial ...passed 00:19:21.324 Test: blockdev reset ...passed 00:19:21.324 Test: blockdev write read 8 blocks ...passed 00:19:21.324 Test: blockdev write read size > 128k ...passed 00:19:21.324 Test: blockdev write read invalid size ...passed 00:19:21.324 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:21.324 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:21.324 Test: blockdev write read max offset ...passed 00:19:21.324 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:21.324 Test: blockdev writev readv 8 blocks ...passed 00:19:21.324 Test: blockdev writev readv 30 x 1block ...passed 00:19:21.324 Test: blockdev writev readv block ...passed 00:19:21.324 Test: blockdev writev readv size > 128k ...passed 00:19:21.324 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:21.324 Test: blockdev comparev and writev ...passed 00:19:21.324 Test: blockdev nvme passthru rw ...passed 00:19:21.324 Test: blockdev nvme passthru vendor specific ...passed 00:19:21.324 Test: blockdev nvme admin passthru ...passed 00:19:21.324 Test: blockdev copy ...passed 
00:19:21.324 00:19:21.324 Run Summary: Type Total Ran Passed Failed Inactive 00:19:21.324 suites 6 6 n/a 0 0 00:19:21.324 tests 138 138 138 0 0 00:19:21.324 asserts 780 780 780 0 n/a 00:19:21.324 00:19:21.324 Elapsed time = 1.375 seconds 00:19:21.324 0 00:19:21.324 19:37:40 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 72524 00:19:21.324 19:37:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 72524 ']' 00:19:21.324 19:37:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 72524 00:19:21.324 19:37:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:21.324 19:37:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.324 19:37:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72524 00:19:21.324 19:37:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:21.324 19:37:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:21.324 19:37:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72524' 00:19:21.324 killing process with pid 72524 00:19:21.324 19:37:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 72524 00:19:21.324 19:37:40 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 72524 00:19:22.256 19:37:41 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:22.257 ************************************ 00:19:22.257 END TEST bdev_bounds 00:19:22.257 ************************************ 00:19:22.257 00:19:22.257 real 0m2.325s 00:19:22.257 user 0m5.696s 00:19:22.257 sys 0m0.299s 00:19:22.257 19:37:41 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:22.257 19:37:41 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:22.257 19:37:41 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:19:22.257 19:37:41 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:22.257 19:37:41 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:22.257 19:37:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:22.257 ************************************ 00:19:22.257 START TEST bdev_nbd 00:19:22.257 ************************************ 00:19:22.257 19:37:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:19:22.257 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:22.257 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:22.257 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:22.257 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:22.257 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:22.257 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:22.257 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
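
Teardown of the bounds test above follows the killprocess helper's pattern: verify the PID still exists with kill -0, read its command name with ps to confirm it is a reactor and never a sudo wrapper, then kill it and wait so its exit status is reaped before the next test starts. A condensed sketch (Linux-only, and wait can only reap processes the calling shell itself spawned):

    killprocess_sketch() {
        local pid=$1 name
        kill -0 "$pid" 2>/dev/null || return 1    # nothing to kill
        name=$(ps --no-headers -o comm= "$pid")
        [[ $name == sudo ]] && return 1           # refuse to kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                   # reap our own child
    }
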
00:19:22.257 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:22.257 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:22.257 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:22.257 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:19:22.257 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:22.257 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:22.257 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:22.257 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:19:22.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:22.257 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=72584 00:19:22.257 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:22.257 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 72584 /var/tmp/spdk-nbd.sock 00:19:22.257 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:22.257 19:37:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 72584 ']' 00:19:22.257 19:37:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:22.257 19:37:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.257 19:37:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:22.257 19:37:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.257 19:37:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:22.257 [2024-12-05 19:37:41.132901] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
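
The nbd_start_disk calls that follow attach each xnvme bdev to a /dev/nbdX node, and waitfornbd then proves the node is usable: poll /proc/partitions until the name shows up, then do a single-block O_DIRECT read and confirm that exactly one 4096-byte block landed in the scratch file. A sketch of that verification; the scratch path /tmp/nbdtest is mine, the trace uses a file under the repo's test/bdev directory:

    # Wait for /dev/$1 to appear, then verify a direct 4 KiB read works.
    waitfornbd_sketch() {
        local nbd=$1 scratch=/tmp/nbdtest i size
        for (( i = 1; i <= 20; i++ )); do
            grep -q -w "$nbd" /proc/partitions && break
            sleep 0.1
        done
        (( i <= 20 )) || return 1                 # device never appeared
        dd if="/dev/$nbd" of="$scratch" bs=4096 count=1 iflag=direct \
            || { rm -f "$scratch"; return 1; }
        size=$(stat -c %s "$scratch")
        rm -f "$scratch"
        [[ $size -eq 4096 ]]                      # exactly one block read back
    }
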
00:19:22.257 [2024-12-05 19:37:41.133016] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.514 [2024-12-05 19:37:41.293051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.514 [2024-12-05 19:37:41.389408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.078 19:37:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.078 19:37:41 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:23.078 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:19:23.078 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:23.078 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:23.078 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:23.078 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:19:23.078 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:23.078 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:23.078 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:23.078 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:23.078 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:23.078 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:23.078 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:23.078 19:37:41 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:19:23.335 19:37:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:23.335 19:37:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:23.335 19:37:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:23.335 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:23.335 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:23.336 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:23.336 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:23.336 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:23.336 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:23.336 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:23.336 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:23.336 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:23.336 
1+0 records in 00:19:23.336 1+0 records out 00:19:23.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000850943 s, 4.8 MB/s 00:19:23.336 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:23.336 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:23.336 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:23.336 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:23.336 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:23.336 19:37:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:23.336 19:37:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:23.336 19:37:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:19:23.625 19:37:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:19:23.625 19:37:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:19:23.625 19:37:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:19:23.625 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:23.625 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:23.625 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:23.625 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:23.625 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:23.625 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:23.625 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:23.625 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:23.625 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:23.625 1+0 records in 00:19:23.625 1+0 records out 00:19:23.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000846071 s, 4.8 MB/s 00:19:23.625 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:23.625 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:23.625 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:23.625 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:23.625 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:23.625 19:37:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:23.625 19:37:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:23.625 19:37:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:19:23.884 19:37:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:19:23.884 19:37:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:19:23.884 19:37:42 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:19:23.884 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:19:23.884 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:23.884 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:23.884 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:23.884 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:19:23.884 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:23.884 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:23.884 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:23.884 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:23.884 1+0 records in 00:19:23.884 1+0 records out 00:19:23.884 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114367 s, 3.6 MB/s 00:19:23.884 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:23.884 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:23.884 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:23.884 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:23.884 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:23.884 19:37:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:23.884 19:37:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:23.884 19:37:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:19:24.142 19:37:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:19:24.142 19:37:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:19:24.142 19:37:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:19:24.142 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:19:24.142 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:24.142 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:24.142 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:24.142 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:19:24.142 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:24.142 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:24.142 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:24.142 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:24.142 1+0 records in 00:19:24.142 1+0 records out 00:19:24.142 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000814088 s, 5.0 MB/s 00:19:24.142 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:24.142 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:24.142 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:24.142 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:24.142 19:37:42 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:24.142 19:37:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:24.142 19:37:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:24.142 19:37:42 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:19:24.142 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:19:24.142 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:19:24.142 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:19:24.142 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:19:24.142 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:24.142 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:24.142 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:24.142 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:19:24.142 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:24.142 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:24.142 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:24.142 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:24.142 1+0 records in 00:19:24.142 1+0 records out 00:19:24.142 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000680247 s, 6.0 MB/s 00:19:24.142 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:24.142 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:24.142 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:24.142 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:24.142 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:24.142 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:24.142 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:24.142 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:19:24.414 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:19:24.414 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:19:24.414 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:19:24.414 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:19:24.414 19:37:43 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:24.414 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:24.414 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:24.414 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:19:24.414 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:24.414 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:24.414 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:24.414 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:24.414 1+0 records in 00:19:24.414 1+0 records out 00:19:24.414 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000984071 s, 4.2 MB/s 00:19:24.414 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:24.414 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:24.414 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:24.414 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:24.414 19:37:43 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:24.414 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:24.414 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:19:24.414 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:24.700 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:24.700 { 00:19:24.700 "nbd_device": "/dev/nbd0", 00:19:24.700 "bdev_name": "nvme0n1" 00:19:24.700 }, 00:19:24.700 { 00:19:24.700 "nbd_device": "/dev/nbd1", 00:19:24.700 "bdev_name": "nvme0n2" 00:19:24.700 }, 00:19:24.700 { 00:19:24.700 "nbd_device": "/dev/nbd2", 00:19:24.700 "bdev_name": "nvme0n3" 00:19:24.700 }, 00:19:24.700 { 00:19:24.700 "nbd_device": "/dev/nbd3", 00:19:24.700 "bdev_name": "nvme1n1" 00:19:24.700 }, 00:19:24.700 { 00:19:24.700 "nbd_device": "/dev/nbd4", 00:19:24.700 "bdev_name": "nvme2n1" 00:19:24.700 }, 00:19:24.700 { 00:19:24.700 "nbd_device": "/dev/nbd5", 00:19:24.700 "bdev_name": "nvme3n1" 00:19:24.700 } 00:19:24.700 ]' 00:19:24.700 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:24.700 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:24.700 { 00:19:24.700 "nbd_device": "/dev/nbd0", 00:19:24.700 "bdev_name": "nvme0n1" 00:19:24.700 }, 00:19:24.700 { 00:19:24.700 "nbd_device": "/dev/nbd1", 00:19:24.700 "bdev_name": "nvme0n2" 00:19:24.700 }, 00:19:24.700 { 00:19:24.700 "nbd_device": "/dev/nbd2", 00:19:24.700 "bdev_name": "nvme0n3" 00:19:24.700 }, 00:19:24.700 { 00:19:24.700 "nbd_device": "/dev/nbd3", 00:19:24.700 "bdev_name": "nvme1n1" 00:19:24.700 }, 00:19:24.700 { 00:19:24.700 "nbd_device": "/dev/nbd4", 00:19:24.700 "bdev_name": "nvme2n1" 00:19:24.700 }, 00:19:24.700 { 00:19:24.700 "nbd_device": "/dev/nbd5", 00:19:24.700 "bdev_name": "nvme3n1" 00:19:24.700 } 00:19:24.700 ]' 00:19:24.700 19:37:43 
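[annotation] The waitfornbd helper traced repeatedly above is a two-phase readiness probe: it polls /proc/partitions until the kernel registers the device, then proves the node is actually readable with a single 4 KiB O_DIRECT read. A minimal sketch reconstructed from the xtrace; the retry sleep is an assumption (no sleep appears in this run because every probe succeeded first try), and /tmp/nbdtest stands in for the repo-internal test/bdev/nbdtest path:

    waitfornbd() {
        local nbd_name=$1 i
        # Phase 1: wait (up to 20 tries) for the kernel to publish the device.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # interval assumed; not visible in the trace
        done
        # Phase 2: prove the node is readable with one 4 KiB direct read
        # that must produce a non-empty output file.
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null; then
                local size
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1    # assumed
        done
        return 1
    }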
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:24.700 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:19:24.700 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:24.701 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:19:24.701 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:24.701 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:24.701 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:24.701 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:24.959 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:24.959 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:24.959 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:24.959 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:24.959 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:24.959 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:24.959 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:24.959 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:24.959 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:24.959 19:37:43 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:25.216 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:25.216 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:25.216 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:25.216 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:25.216 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:25.216 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:25.216 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:25.216 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:25.216 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:25.216 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:19:25.216 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:19:25.473 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:19:25.473 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:19:25.473 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:25.473 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:25.473 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:19:25.473 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:25.473 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:25.473 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:25.473 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:19:25.473 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:19:25.473 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:19:25.473 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:19:25.473 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:25.473 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:25.473 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:19:25.473 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:25.473 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:25.473 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:25.473 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:19:25.730 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:19:25.730 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:19:25.730 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:19:25.730 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:25.730 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:25.730 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:19:25.730 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:25.730 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:25.730 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:25.730 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:19:25.988 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:19:25.988 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:19:25.988 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:19:25.988 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:25.988 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:25.988 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:19:25.988 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:25.988 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:25.988 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:25.988 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:25.988 19:37:44 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
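[annotation] Teardown goes through the inverse helper: after each nbd_stop_disk RPC, waitfornbd_exit polls until the entry disappears from /proc/partitions. A plausible reconstruction of the traced logic (the retry back-off is again an assumption; in this run the entry was already gone on the first check, hence the immediate break):

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # Done as soon as the kernel has dropped the partition entry.
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1    # back-off assumed
        done
        return 0
    }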
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:26.273 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:19:26.531 /dev/nbd0 00:19:26.531 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:26.531 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:26.531 19:37:45 blockdev_xnvme.bdev_nbd -- 
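[annotation] nbd_rpc_data_verify begins by re-exporting all six bdevs, this time onto an explicit device list (/dev/nbd0, /dev/nbd1, /dev/nbd10..13). nbd_start_disks walks the two parallel arrays, asks the SPDK app over its RPC socket to bind each bdev to the requested node, and gates on the waitfornbd probe sketched earlier. Condensed from the trace, with rpc_py standing in for scripts/rpc.py:

    nbd_start_disks() {
        local rpc_server=$1 i
        local bdev_list=($2)    # e.g. "nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1"
        local nbd_list=($3)     # e.g. "/dev/nbd0 /dev/nbd1 /dev/nbd10 ..."
        for ((i = 0; i < ${#nbd_list[@]}; i++)); do
            "$rpc_py" -s "$rpc_server" nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
            waitfornbd "$(basename "${nbd_list[i]}")"
        done
    }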
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:26.531 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:26.531 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:26.531 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:26.531 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:26.531 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:26.531 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:26.531 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:26.531 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:26.531 1+0 records in 00:19:26.531 1+0 records out 00:19:26.531 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441097 s, 9.3 MB/s 00:19:26.531 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:26.531 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:26.531 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:26.531 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:26.531 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:26.531 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:26.531 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:26.531 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:19:26.531 /dev/nbd1 00:19:26.789 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:26.789 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:26.789 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:26.789 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:26.789 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:26.789 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:26.789 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:26.789 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:26.790 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:26.790 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:26.790 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:26.790 1+0 records in 00:19:26.790 1+0 records out 00:19:26.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000470767 s, 8.7 MB/s 00:19:26.790 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:26.790 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:26.790 19:37:45 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:26.790 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:26.790 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:26.790 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:26.790 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:26.790 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:19:26.790 /dev/nbd10 00:19:26.790 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:19:26.790 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:19:26.790 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:19:26.790 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:26.790 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:26.790 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:26.790 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:19:26.790 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:26.790 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:26.790 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:26.790 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:26.790 1+0 records in 00:19:26.790 1+0 records out 00:19:26.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349303 s, 11.7 MB/s 00:19:26.790 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:26.790 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:26.790 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.048 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:27.049 19:37:45 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:27.049 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:27.049 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:27.049 19:37:45 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:19:27.049 /dev/nbd11 00:19:27.049 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:19:27.049 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:19:27.049 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:19:27.049 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:27.049 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:27.049 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:27.049 19:37:46 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:19:27.049 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:27.049 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:27.049 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:27.049 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:27.049 1+0 records in 00:19:27.049 1+0 records out 00:19:27.049 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485104 s, 8.4 MB/s 00:19:27.049 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.049 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:27.049 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.049 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:27.049 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:27.049 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:27.049 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:27.049 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:19:27.307 /dev/nbd12 00:19:27.307 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:19:27.307 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:19:27.307 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:19:27.307 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:27.307 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:27.307 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:27.307 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:19:27.307 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:27.307 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:27.307 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:27.307 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:27.307 1+0 records in 00:19:27.307 1+0 records out 00:19:27.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000497363 s, 8.2 MB/s 00:19:27.307 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.307 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:27.307 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.307 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:27.308 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:27.308 19:37:46 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:27.308 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:27.308 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:19:27.566 /dev/nbd13 00:19:27.566 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:19:27.566 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:19:27.566 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:19:27.566 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:27.566 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:27.566 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:27.566 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:19:27.566 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:27.566 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:27.566 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:27.566 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:27.566 1+0 records in 00:19:27.566 1+0 records out 00:19:27.566 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000701887 s, 5.8 MB/s 00:19:27.567 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.567 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:27.567 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:27.567 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:27.567 19:37:46 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:27.567 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:27.567 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:19:27.567 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:27.567 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:27.567 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:27.827 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:27.827 { 00:19:27.827 "nbd_device": "/dev/nbd0", 00:19:27.827 "bdev_name": "nvme0n1" 00:19:27.827 }, 00:19:27.827 { 00:19:27.827 "nbd_device": "/dev/nbd1", 00:19:27.827 "bdev_name": "nvme0n2" 00:19:27.827 }, 00:19:27.827 { 00:19:27.827 "nbd_device": "/dev/nbd10", 00:19:27.827 "bdev_name": "nvme0n3" 00:19:27.827 }, 00:19:27.827 { 00:19:27.827 "nbd_device": "/dev/nbd11", 00:19:27.827 "bdev_name": "nvme1n1" 00:19:27.827 }, 00:19:27.827 { 00:19:27.827 "nbd_device": "/dev/nbd12", 00:19:27.827 "bdev_name": "nvme2n1" 00:19:27.827 }, 00:19:27.827 { 00:19:27.827 "nbd_device": "/dev/nbd13", 00:19:27.827 "bdev_name": "nvme3n1" 00:19:27.827 } 00:19:27.827 ]' 00:19:27.827 19:37:46 
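[annotation] With all six devices attached, the harness counts what the target actually exports by round-tripping through the RPC and jq. The same pipeline, compressed; the `|| true` guards grep -c's non-zero exit when nothing matches, which is the bare `true` visible in the zero-device checks earlier in this log:

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    count=$("$rpc_py" -s /var/tmp/spdk-nbd.sock nbd_get_disks \
            | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 6 ]    # this run expects exactly six exported devices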
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:27.827 { 00:19:27.827 "nbd_device": "/dev/nbd0", 00:19:27.827 "bdev_name": "nvme0n1" 00:19:27.827 }, 00:19:27.827 { 00:19:27.827 "nbd_device": "/dev/nbd1", 00:19:27.827 "bdev_name": "nvme0n2" 00:19:27.827 }, 00:19:27.827 { 00:19:27.827 "nbd_device": "/dev/nbd10", 00:19:27.827 "bdev_name": "nvme0n3" 00:19:27.827 }, 00:19:27.827 { 00:19:27.827 "nbd_device": "/dev/nbd11", 00:19:27.827 "bdev_name": "nvme1n1" 00:19:27.827 }, 00:19:27.827 { 00:19:27.827 "nbd_device": "/dev/nbd12", 00:19:27.827 "bdev_name": "nvme2n1" 00:19:27.827 }, 00:19:27.827 { 00:19:27.827 "nbd_device": "/dev/nbd13", 00:19:27.827 "bdev_name": "nvme3n1" 00:19:27.827 } 00:19:27.827 ]' 00:19:27.827 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:27.827 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:19:27.827 /dev/nbd1 00:19:27.827 /dev/nbd10 00:19:27.827 /dev/nbd11 00:19:27.827 /dev/nbd12 00:19:27.827 /dev/nbd13' 00:19:27.827 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:27.827 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:19:27.827 /dev/nbd1 00:19:27.827 /dev/nbd10 00:19:27.827 /dev/nbd11 00:19:27.827 /dev/nbd12 00:19:27.827 /dev/nbd13' 00:19:27.827 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:19:27.827 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:19:27.827 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:19:27.827 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:19:27.827 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:19:27.827 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:27.827 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:27.827 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:27.827 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:27.827 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:27.827 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:27.827 256+0 records in 00:19:27.827 256+0 records out 00:19:27.827 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00712715 s, 147 MB/s 00:19:27.827 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:27.827 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:27.827 256+0 records in 00:19:27.827 256+0 records out 00:19:27.827 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0604814 s, 17.3 MB/s 00:19:27.827 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:27.827 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:19:28.086 256+0 records in 00:19:28.086 256+0 records out 00:19:28.086 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.128065 s, 8.2 MB/s 00:19:28.086 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:28.086 19:37:46 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:19:28.345 256+0 records in 00:19:28.345 256+0 records out 00:19:28.345 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.185834 s, 5.6 MB/s 00:19:28.345 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:28.345 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:19:28.345 256+0 records in 00:19:28.345 256+0 records out 00:19:28.345 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.214196 s, 4.9 MB/s 00:19:28.345 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:28.345 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:19:28.604 256+0 records in 00:19:28.604 256+0 records out 00:19:28.604 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.213575 s, 4.9 MB/s 00:19:28.604 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:28.604 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:19:28.863 256+0 records in 00:19:28.863 256+0 records out 00:19:28.863 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.248328 s, 4.2 MB/s 00:19:28.863 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:19:28.863 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:28.863 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:28.863 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:28.863 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:28.863 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:28.863 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:28.863 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:28.863 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:28.863 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:28.863 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:19:28.863 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:28.863 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:19:28.863 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:28.863 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:19:28.863 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:28.863 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:19:28.863 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:28.863 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:19:28.863 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:29.122 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:19:29.122 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:29.122 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:19:29.122 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:29.122 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:29.122 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:29.122 19:37:47 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:29.122 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:29.122 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:29.122 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:29.122 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:29.122 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:29.122 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:29.122 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:29.122 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:29.122 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:29.122 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:19:29.381 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:29.381 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:29.381 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:29.381 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:29.381 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:29.381 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:29.381 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:29.381 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:29.381 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:29.381 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
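[annotation] The write/verify pass above is a plain data-integrity loop: one 1 MiB random pattern is pushed through every NBD node with O_DIRECT, then compared back byte-for-byte. Condensed from the trace:

    pattern=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    dd if=/dev/urandom of="$pattern" bs=4096 count=256             # 1 MiB of random data
    for nbd in "${nbd_list[@]}"; do
        dd if="$pattern" of="$nbd" bs=4096 count=256 oflag=direct  # write pass
    done
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$pattern" "$nbd"                             # byte-for-byte read-back
    done
    rm "$pattern"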
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:19:29.641 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:19:29.641 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:19:29.641 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:19:29.641 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:29.641 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:29.641 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:19:29.641 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:29.641 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:29.641 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:29.641 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:19:29.900 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:19:29.900 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:19:29.900 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:19:29.900 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:29.900 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:29.900 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:19:29.900 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:29.900 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:29.900 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:29.900 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:19:30.159 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:19:30.159 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:19:30.159 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:19:30.159 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:30.159 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:30.159 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:19:30.159 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:30.159 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:30.159 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:30.159 19:37:48 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:19:30.159 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:19:30.159 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:19:30.159 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:19:30.159 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:30.159 19:37:49 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:30.159 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:19:30.159 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:30.159 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:30.159 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:30.159 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:30.159 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:30.417 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:30.417 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:30.417 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:30.417 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:30.417 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:30.417 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:30.417 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:30.417 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:30.417 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:30.417 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:30.417 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:30.417 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:30.417 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:30.417 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:30.417 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:30.417 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:30.676 malloc_lvol_verify 00:19:30.676 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:30.934 08fe82f0-b38c-48c3-82e8-e3de53f33a7d 00:19:30.934 19:37:49 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:31.193 65dbac84-ee09-4cba-880d-f6cc021087e7 00:19:31.193 19:37:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:31.451 /dev/nbd0 00:19:31.451 19:37:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:31.451 19:37:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:31.451 19:37:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:31.451 19:37:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:31.451 19:37:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:19:31.451 mke2fs 1.47.0 (5-Feb-2023) 00:19:31.451 Discarding device blocks: 0/4096 done 00:19:31.451 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:31.451 00:19:31.451 Allocating group tables: 0/1 done 00:19:31.451 Writing inode tables: 0/1 done 00:19:31.451 Creating journal (1024 blocks): done 00:19:31.451 Writing superblocks and filesystem accounting information: 0/1 done 00:19:31.451 00:19:31.451 19:37:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:31.451 19:37:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:31.451 19:37:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:31.451 19:37:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:31.451 19:37:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:31.451 19:37:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:31.451 19:37:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:31.451 19:37:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:31.451 19:37:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:31.451 19:37:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:31.451 19:37:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:31.451 19:37:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:31.451 19:37:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:31.710 19:37:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:31.710 19:37:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:31.710 19:37:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 72584 00:19:31.710 19:37:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 72584 ']' 00:19:31.710 19:37:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 72584 00:19:31.710 19:37:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:31.710 19:37:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.710 19:37:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72584 00:19:31.710 killing process with pid 72584 00:19:31.710 19:37:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:31.710 19:37:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:31.710 19:37:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72584' 00:19:31.710 19:37:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 72584 00:19:31.710 19:37:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 72584 00:19:32.277 19:37:51 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:32.277 00:19:32.277 real 0m10.145s 00:19:32.277 user 0m13.861s 00:19:32.277 sys 0m3.397s 00:19:32.277 19:37:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:32.277 19:37:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:32.277 ************************************ 
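[annotation] nbd_with_lvol_verify stacks a logical volume entirely inside SPDK and proves the exported device works end-to-end by formatting it. The RPC sequence, reconstructed from the trace (rpc_py and the socket path as used throughout this run; sizes are the ones shown there):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    "$rpc_py" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB RAM bdev, 512 B blocks
    "$rpc_py" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
    "$rpc_py" -s "$sock" bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume
    "$rpc_py" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0                 # export it as /dev/nbd0
    # NBD reports capacity asynchronously; proceed only once it is non-zero
    # (8192 x 512 B sectors = 4 MiB in this run).
    [ -e /sys/block/nbd0/size ] && (( $(cat /sys/block/nbd0/size) != 0 ))
    mkfs.ext4 /dev/nbd0                                                    # end-to-end smoke test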
00:19:32.277 END TEST bdev_nbd 00:19:32.277 ************************************ 00:19:32.277 19:37:51 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:19:32.277 19:37:51 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:19:32.277 19:37:51 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:19:32.277 19:37:51 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:19:32.277 19:37:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:32.277 19:37:51 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:32.277 19:37:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:32.277 ************************************ 00:19:32.277 START TEST bdev_fio 00:19:32.277 ************************************ 00:19:32.277 19:37:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:19:32.277 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:32.277 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:32.277 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:32.277 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:32.277 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:32.277 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:32.277 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:32.277 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:32.277 19:37:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:32.277 19:37:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:19:32.277 19:37:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:19:32.277 19:37:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:32.277 19:37:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:32.277 19:37:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:32.277 19:37:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:19:32.277 19:37:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:32.277 19:37:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:32.277 19:37:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:32.277 19:37:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:19:32.277 19:37:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:19:32.277 19:37:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:19:32.277 19:37:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:19:32.536 19:37:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # 
echo serialize_overlap=1 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:32.537 ************************************ 00:19:32.537 START TEST bdev_fio_rw_verify 00:19:32.537 ************************************ 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
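[annotation] Assembled from the fragments echoed above, fio_config_gen's verify config gains serialize_overlap=1 (the fio-3.35/AIO branch) plus one job stanza per bdev. The global section it cats in is not visible in this trace, so only the reconstructable tail is sketched:

    fio_config=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
    echo serialize_overlap=1 >> "$fio_config"     # appended when fio 3.x drives AIO bdevs
    for b in nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1; do
        echo "[job_$b]"    >> "$fio_config"       # one job section per bdev under test
        echo "filename=$b" >> "$fio_config"
    done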
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:32.537 19:37:51 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:32.537 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:32.537 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:32.537 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:32.537 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:32.537 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:32.537 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:32.537 fio-3.35 00:19:32.537 Starting 6 threads 00:19:44.754 00:19:44.754 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=72981: Thu Dec 5 19:38:02 2024 00:19:44.754 read: IOPS=24.0k, BW=93.6MiB/s (98.2MB/s)(936MiB/10001msec) 00:19:44.754 slat (usec): min=2, max=2519, avg= 5.41, stdev=14.12 00:19:44.754 clat (usec): min=69, max=9184, avg=754.04, 
stdev=614.32
00:19:44.754 lat (usec): min=75, max=9193, avg=759.45, stdev=615.11
00:19:44.754 clat percentiles (usec):
00:19:44.754 | 50.000th=[ 537], 99.000th=[ 2835], 99.900th=[ 4146], 99.990th=[ 5604],
00:19:44.754 | 99.999th=[ 9110]
00:19:44.754 write: IOPS=24.4k, BW=95.1MiB/s (99.8MB/s)(951MiB/10001msec); 0 zone resets
00:19:44.754 slat (usec): min=12, max=7652, avg=31.98, stdev=102.74
00:19:44.754 clat (usec): min=61, max=8933, avg=964.09, stdev=705.57
00:19:44.754 lat (usec): min=76, max=8991, avg=996.08, stdev=719.72
00:19:44.754 clat percentiles (usec):
00:19:44.754 | 50.000th=[ 725], 99.000th=[ 3326], 99.900th=[ 4621], 99.990th=[ 6456],
00:19:44.754 | 99.999th=[ 8848]
00:19:44.754 bw ( KiB/s): min=52826, max=186060, per=100.00%, avg=98624.32, stdev=6740.47, samples=114
00:19:44.754 iops : min=13205, max=46515, avg=24654.63, stdev=1685.12, samples=114
00:19:44.754 lat (usec) : 100=0.08%, 250=10.56%, 500=26.61%, 750=20.77%, 1000=11.21%
00:19:44.754 lat (msec) : 2=23.79%, 4=6.76%, 10=0.22%
00:19:44.754 cpu : usr=42.63%, sys=32.92%, ctx=7665, majf=0, minf=21281
00:19:44.754 IO depths : 1=11.4%, 2=23.8%, 4=51.2%, 8=13.7%, 16=0.0%, 32=0.0%, >=64=0.0%
00:19:44.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:44.754 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:44.754 issued rwts: total=239736,243560,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:44.754 latency : target=0, window=0, percentile=100.00%, depth=8
00:19:44.754 
00:19:44.754 Run status group 0 (all jobs):
00:19:44.754 READ: bw=93.6MiB/s (98.2MB/s), 93.6MiB/s-93.6MiB/s (98.2MB/s-98.2MB/s), io=936MiB (982MB), run=10001-10001msec
00:19:44.754 WRITE: bw=95.1MiB/s (99.8MB/s), 95.1MiB/s-95.1MiB/s (99.8MB/s-99.8MB/s), io=951MiB (998MB), run=10001-10001msec
00:19:44.754 -----------------------------------------------------
00:19:44.754 Suppressions used:
00:19:44.754 count bytes template
00:19:44.754 6 48 /usr/src/fio/parse.c
00:19:44.754 3630 348480 /usr/src/fio/iolog.c
00:19:44.754 1 8 libtcmalloc_minimal.so
00:19:44.754 1 904 libcrypto.so
00:19:44.754 -----------------------------------------------------
00:19:44.754 
00:19:44.754 
00:19:44.754 real 0m11.789s
00:19:44.754 user 0m26.987s
00:19:44.754 sys 0m20.016s
19:38:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:44.754 ************************************
00:19:44.754 END TEST bdev_fio_rw_verify
19:38:03 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:19:44.754 ************************************
19:38:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
19:38:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
19:38:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
19:38:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
19:38:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim
19:38:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=
19:38:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
19:38:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local
fio_dir=/usr/src/fio 00:19:44.754 19:38:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:44.754 19:38:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:19:44.754 19:38:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:44.754 19:38:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:44.754 19:38:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:44.754 19:38:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:19:44.754 19:38:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:19:44.754 19:38:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:19:44.754 19:38:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:44.755 19:38:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "63d6e8c1-9e86-4152-a8d0-7fb96fdf2084"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "63d6e8c1-9e86-4152-a8d0-7fb96fdf2084",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "cdd3a312-46f0-43b4-b8bd-a35da6c9985f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cdd3a312-46f0-43b4-b8bd-a35da6c9985f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "304b04cb-e734-40cf-a1c4-6ddec1d431aa"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "304b04cb-e734-40cf-a1c4-6ddec1d431aa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "290dcd6d-1359-4bd3-ad01-ad282e481185"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "290dcd6d-1359-4bd3-ad01-ad282e481185",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "311ff68b-c39c-447e-b784-6b89dcdce020"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "311ff68b-c39c-447e-b784-6b89dcdce020",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "f3b591a0-0571-42ec-9ac9-19eb617bf609"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "f3b591a0-0571-42ec-9ac9-19eb617bf609",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:19:44.755 19:38:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:44.755 19:38:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:44.755 /home/vagrant/spdk_repo/spdk 00:19:44.755 19:38:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:44.755 19:38:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:44.755 19:38:03 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:19:44.755 
00:19:44.755 real 0m11.943s
00:19:44.755 user 0m27.065s
00:19:44.755 sys 0m20.077s
19:38:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:44.755 ************************************
00:19:44.755 END TEST bdev_fio
00:19:44.755 ************************************
19:38:03 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:19:44.755 19:38:03 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT
00:19:44.755 19:38:03 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:19:44.755 19:38:03 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:19:44.755 19:38:03 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:44.755 19:38:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:44.755 ************************************
00:19:44.755 START TEST bdev_verify
00:19:44.755 ************************************
00:19:44.755 19:38:03 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:19:44.755 [2024-12-05 19:38:03.326405] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization...
00:19:44.755 [2024-12-05 19:38:03.326521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73155 ]
00:19:44.755 [2024-12-05 19:38:03.485648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:44.755 [2024-12-05 19:38:03.582981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:44.755 [2024-12-05 19:38:03.583060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:45.014 Running I/O for 5 seconds...
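bdev_verify hands the whole verification to the bdevperf example app; the exact invocation is in the trace above. The same command as a standalone bash sketch with the flags spelled out (annotations reflect bdevperf's conventional options; -C's effect is inferred from the per-core result rows below, where every bdev appears once per core of the mask):

```bash
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
args=(
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json  # bdev config to load
  -q 128       # I/O queue depth per job
  -o 4096      # I/O size in bytes
  -w verify    # write, read back, and compare patterns
  -t 5         # run time in seconds
  -C           # schedule each job on every core of the mask
  -m 0x3       # core mask: cores 0 and 1
)
"$bdevperf" "${args[@]}"
```

The bdev_verify_big_io run later in this log is the same invocation with -o 65536, trading small-block IOPS for large-block throughput.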
00:19:47.372 24160.00 IOPS, 94.38 MiB/s [2024-12-05T19:38:07.314Z]
23600.00 IOPS, 92.19 MiB/s [2024-12-05T19:38:08.250Z]
24351.67 IOPS, 95.12 MiB/s [2024-12-05T19:38:09.184Z]
24335.75 IOPS, 95.06 MiB/s [2024-12-05T19:38:09.184Z]
24184.80 IOPS, 94.47 MiB/s
00:19:50.178 Latency(us)
00:19:50.178 [2024-12-05T19:38:09.184Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:50.178 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:50.178 Verification LBA range: start 0x0 length 0x80000
00:19:50.178 nvme0n1 : 5.08 1941.67 7.58 0.00 0.00 65796.59 5646.18 63317.86
00:19:50.178 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:50.178 Verification LBA range: start 0x80000 length 0x80000
00:19:50.178 nvme0n1 : 5.07 1894.99 7.40 0.00 0.00 67407.55 5041.23 69367.34
00:19:50.178 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:50.178 Verification LBA range: start 0x0 length 0x80000
00:19:50.178 nvme0n2 : 5.06 1920.93 7.50 0.00 0.00 66372.68 7461.02 62511.26
00:19:50.178 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:50.178 Verification LBA range: start 0x80000 length 0x80000
00:19:50.178 nvme0n2 : 5.08 1888.82 7.38 0.00 0.00 67484.05 5041.23 64931.05
00:19:50.178 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:50.178 Verification LBA range: start 0x0 length 0x80000
00:19:50.178 nvme0n3 : 5.05 1900.48 7.42 0.00 0.00 66956.71 7864.32 57671.68
00:19:50.178 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:50.178 Verification LBA range: start 0x80000 length 0x80000
00:19:50.178 nvme0n3 : 5.05 1875.03 7.32 0.00 0.00 67838.38 8973.39 65737.65
00:19:50.178 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:50.178 Verification LBA range: start 0x0 length 0xa0000
00:19:50.178 nvme1n1 : 5.07 1895.06 7.40 0.00 0.00 67010.00 4940.41 63721.16
00:19:50.178 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:50.178 Verification LBA range: start 0xa0000 length 0xa0000
00:19:50.178 nvme1n1 : 5.09 1861.76 7.27 0.00 0.00 68180.13 10284.11 70577.23
00:19:50.178 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:50.178 Verification LBA range: start 0x0 length 0x20000
00:19:50.178 nvme2n1 : 5.08 1914.55 7.48 0.00 0.00 66199.40 5973.86 61704.66
00:19:50.178 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:50.178 Verification LBA range: start 0x20000 length 0x20000
00:19:50.178 nvme2n1 : 5.07 1866.85 7.29 0.00 0.00 67850.35 6301.54 77433.30
00:19:50.178 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:19:50.178 Verification LBA range: start 0x0 length 0xbd0bd
00:19:50.178 nvme3n1 : 5.08 2466.77 9.64 0.00 0.00 51198.06 4612.73 55251.89
00:19:50.178 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:19:50.178 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:19:50.178 nvme3n1 : 5.09 2432.72 9.50 0.00 0.00 51865.62 3806.13 63721.16
[2024-12-05T19:38:09.184Z] ===================================================================================================================
[2024-12-05T19:38:09.184Z] Total : 23859.61 93.20 0.00 0.00 63894.55 3806.13 77433.30
00:19:51.114 ************************************
00:19:51.114 END TEST bdev_verify
00:19:51.114 ************************************
00:19:51.114 
00:19:51.114 real
0m6.582s
00:19:51.114 user 0m10.882s
00:19:51.114 sys 0m1.273s
19:38:09 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
19:38:09 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:19:51.114 19:38:09 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
19:38:09 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
19:38:09 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
19:38:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:51.114 ************************************
00:19:51.114 START TEST bdev_verify_big_io
00:19:51.114 ************************************
00:19:51.114 19:38:09 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:19:51.114 [2024-12-05 19:38:09.963582] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization...
00:19:51.114 [2024-12-05 19:38:09.963694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73259 ]
00:19:51.372 [2024-12-05 19:38:10.124710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:51.372 [2024-12-05 19:38:10.223452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:51.372 [2024-12-05 19:38:10.223528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:51.938 Running I/O for 5 seconds...
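The IOPS and MiB/s columns in these progress samples are the same measurement in different units: at a 4 KiB I/O size, MiB/s = IOPS x 4096 / 2^20. A quick consistency check on the final verify sample above (pure arithmetic, nothing SPDK-specific):

```bash
# 24184.80 IOPS at 4096 B per I/O, expressed in MiB/s (1 MiB = 1048576 B).
echo 'scale=2; 24184.80 * 4096 / 1048576' | bc
# prints 94.47 -- matching the "24184.80 IOPS, 94.47 MiB/s" sample above.
```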
00:19:58.035 1524.00 IOPS, 95.25 MiB/s [2024-12-05T19:38:17.041Z]
2867.00 IOPS, 179.19 MiB/s
00:19:58.035 Latency(us)
00:19:58.035 [2024-12-05T19:38:17.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:58.035 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:58.035 Verification LBA range: start 0x0 length 0x8000
00:19:58.035 nvme0n1 : 5.97 83.10 5.19 0.00 0.00 1437380.11 129055.51 2374621.34
00:19:58.035 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:58.035 Verification LBA range: start 0x8000 length 0x8000
00:19:58.035 nvme0n1 : 5.99 106.82 6.68 0.00 0.00 1146242.84 7108.14 1329271.73
00:19:58.035 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:58.035 Verification LBA range: start 0x0 length 0x8000
00:19:58.035 nvme0n2 : 5.99 112.20 7.01 0.00 0.00 1083872.98 6805.66 1077613.49
00:19:58.035 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:58.035 Verification LBA range: start 0x8000 length 0x8000
00:19:58.035 nvme0n2 : 5.99 74.74 4.67 0.00 0.00 1605633.80 75013.51 2555299.05
00:19:58.035 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:58.035 Verification LBA range: start 0x0 length 0x8000
00:19:58.035 nvme0n3 : 5.99 122.96 7.68 0.00 0.00 957666.99 13208.02 1238932.87
00:19:58.035 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:58.035 Verification LBA range: start 0x8000 length 0x8000
00:19:58.035 nvme0n3 : 6.04 103.36 6.46 0.00 0.00 1111204.89 31457.28 1922927.06
00:19:58.035 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:58.035 Verification LBA range: start 0x0 length 0xa000
00:19:58.035 nvme1n1 : 5.98 109.68 6.86 0.00 0.00 1036813.26 25407.80 1858399.31
00:19:58.035 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:58.035 Verification LBA range: start 0xa000 length 0xa000
00:19:58.035 nvme1n1 : 5.97 91.08 5.69 0.00 0.00 1215385.04 7007.31 2039077.02
00:19:58.035 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:58.035 Verification LBA range: start 0x0 length 0x2000
00:19:58.035 nvme2n1 : 5.99 90.85 5.68 0.00 0.00 1209918.83 12804.73 3329632.10
00:19:58.035 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:58.035 Verification LBA range: start 0x2000 length 0x2000
00:19:58.035 nvme2n1 : 6.03 124.64 7.79 0.00 0.00 844973.16 14821.22 1522854.99
00:19:58.035 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:19:58.035 Verification LBA range: start 0x0 length 0xbd0b
00:19:58.035 nvme3n1 : 5.99 168.93 10.56 0.00 0.00 626385.70 5595.77 1006632.96
00:19:58.035 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:19:58.035 Verification LBA range: start 0xbd0b length 0xbd0b
00:19:58.035 nvme3n1 : 6.17 207.59 12.97 0.00 0.00 493379.19 450.56 935652.43
[2024-12-05T19:38:17.041Z] ===================================================================================================================
[2024-12-05T19:38:17.041Z] Total : 1395.94 87.25 0.00 0.00 974871.44 450.56 3329632.10
00:19:58.969 ************************************
00:19:58.969 END TEST bdev_verify_big_io
00:19:58.969 ************************************
00:19:58.969 
00:19:58.969 real 0m7.860s
00:19:58.969 user 0m14.585s
00:19:58.969 sys 0m0.353s
19:38:17 blockdev_xnvme.bdev_verify_big_io --
common/autotest_common.sh@1130 -- # xtrace_disable
00:19:58.969 19:38:17 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:19:58.969 19:38:17 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
19:38:17 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
19:38:17 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
19:38:17 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:19:58.969 ************************************
00:19:58.969 START TEST bdev_write_zeroes
00:19:58.969 ************************************
00:19:58.969 19:38:17 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:59.227 [2024-12-05 19:38:17.881164] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization...
00:19:59.227 [2024-12-05 19:38:17.881272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73368 ]
00:19:59.227 [2024-12-05 19:38:18.037263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:59.227 [2024-12-05 19:38:18.132890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:59.793 Running I/O for 1 seconds...
00:20:00.726 73536.00 IOPS, 287.25 MiB/s
00:20:00.726 Latency(us)
00:20:00.726 [2024-12-05T19:38:19.732Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:00.726 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:20:00.726 nvme0n1 : 1.02 11800.89 46.10 0.00 0.00 10836.08 4209.43 23290.49
00:20:00.726 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:20:00.726 nvme0n2 : 1.02 11787.55 46.05 0.00 0.00 10840.07 4209.43 22786.36
00:20:00.726 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:20:00.726 nvme0n3 : 1.03 11735.00 45.84 0.00 0.00 10880.72 4385.87 22383.06
00:20:00.726 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:20:00.726 nvme1n1 : 1.01 11741.60 45.87 0.00 0.00 10866.83 4486.70 21979.77
00:20:00.726 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:20:00.726 nvme2n1 : 1.03 11716.92 45.77 0.00 0.00 10882.52 4587.52 21677.29
00:20:00.726 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:20:00.726 nvme3n1 : 1.02 14186.50 55.42 0.00 0.00 8976.47 3705.30 20870.70
[2024-12-05T19:38:19.732Z] ===================================================================================================================
[2024-12-05T19:38:19.732Z] Total : 72968.46 285.03 0.00 0.00 10493.98 3705.30 23290.49
00:20:01.292 
00:20:01.292 real 0m2.448s
00:20:01.292 user 0m1.793s
00:20:01.292 sys 0m0.463s
00:20:01.292 ************************************
00:20:01.292 END TEST bdev_write_zeroes
00:20:01.292 ************************************
19:38:20 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
19:38:20
blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:20:01.550 19:38:20 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
19:38:20 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
19:38:20 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
19:38:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:20:01.550 ************************************
00:20:01.550 START TEST bdev_json_nonenclosed
00:20:01.550 ************************************
00:20:01.550 19:38:20 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:20:01.808 [2024-12-05 19:38:20.396442] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization...
00:20:01.808 [2024-12-05 19:38:20.396545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73416 ]
00:20:01.808 [2024-12-05 19:38:20.557225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:01.808 [2024-12-05 19:38:20.652294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:01.808 [2024-12-05 19:38:20.652361] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:20:01.809 [2024-12-05 19:38:20.652376] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:20:01.809 [2024-12-05 19:38:20.652386] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:02.067 
00:20:02.067 real 0m0.496s
00:20:02.067 user 0m0.299s
00:20:02.067 sys 0m0.093s
00:20:02.067 ************************************
00:20:02.067 END TEST bdev_json_nonenclosed
00:20:02.067 ************************************
19:38:20 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
19:38:20 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:20:02.067 19:38:20 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
19:38:20 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
19:38:20 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
19:38:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:20:02.067 ************************************
00:20:02.067 START TEST bdev_json_nonarray
00:20:02.067 ************************************
00:20:02.067 19:38:20 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:20:02.067 [2024-12-05 19:38:20.953372] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization...
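The nonenclosed test above and the nonarray test now starting both feed bdevperf a deliberately malformed config that json_config rejects: one payload is not wrapped in {} ("Invalid JSON configuration: not enclosed in {}."), the other has a 'subsystems' value that is not an array. The accepted top-level shape, matching the save_config output later in this log, is sketched below as a bash here-doc (the empty bdev entry and the /tmp path are illustrative; the exact contents of nonenclosed.json and nonarray.json are not shown in this log):

```bash
# Minimal well-formed config skeleton: a top-level object enclosing a
# "subsystems" array. Anything else trips the json_config errors logged above.
cat > /tmp/good.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": []
    }
  ]
}
EOF
```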
00:20:02.067 [2024-12-05 19:38:20.953483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73436 ] 00:20:02.326 [2024-12-05 19:38:21.109570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.326 [2024-12-05 19:38:21.205596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.326 [2024-12-05 19:38:21.205670] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:20:02.326 [2024-12-05 19:38:21.205687] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:02.326 [2024-12-05 19:38:21.205695] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:02.584 00:20:02.584 real 0m0.494s 00:20:02.584 user 0m0.286s 00:20:02.584 sys 0m0.103s 00:20:02.584 19:38:21 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:02.584 ************************************ 00:20:02.584 END TEST bdev_json_nonarray 00:20:02.584 ************************************ 00:20:02.584 19:38:21 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:02.584 19:38:21 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:20:02.584 19:38:21 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:20:02.584 19:38:21 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:20:02.584 19:38:21 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:20:02.584 19:38:21 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:20:02.584 19:38:21 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:02.584 19:38:21 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:02.584 19:38:21 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:20:02.584 19:38:21 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:20:02.584 19:38:21 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:20:02.584 19:38:21 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:20:02.584 19:38:21 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:03.150 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:18.022 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:18.956 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:20:18.956 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:18.956 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:20:18.956 00:20:18.956 real 1m4.451s 00:20:18.956 user 1m21.270s 00:20:18.956 sys 1m0.022s 00:20:18.956 19:38:37 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:18.956 ************************************ 00:20:18.956 END TEST blockdev_xnvme 00:20:18.956 ************************************ 00:20:18.956 19:38:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:20:18.956 19:38:37 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:20:18.956 19:38:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:18.956 19:38:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:18.956 19:38:37 -- 
common/autotest_common.sh@10 -- # set +x 00:20:18.956 ************************************ 00:20:18.956 START TEST ublk 00:20:18.956 ************************************ 00:20:18.956 19:38:37 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:20:19.214 * Looking for test storage... 00:20:19.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:20:19.214 19:38:38 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:19.214 19:38:38 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:20:19.214 19:38:38 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:19.214 19:38:38 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:19.214 19:38:38 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:19.214 19:38:38 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:19.214 19:38:38 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:19.214 19:38:38 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:20:19.214 19:38:38 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:20:19.214 19:38:38 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:20:19.214 19:38:38 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:20:19.214 19:38:38 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:20:19.214 19:38:38 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:20:19.214 19:38:38 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:20:19.214 19:38:38 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:19.214 19:38:38 ublk -- scripts/common.sh@344 -- # case "$op" in 00:20:19.214 19:38:38 ublk -- scripts/common.sh@345 -- # : 1 00:20:19.214 19:38:38 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:19.214 19:38:38 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:19.214 19:38:38 ublk -- scripts/common.sh@365 -- # decimal 1 00:20:19.215 19:38:38 ublk -- scripts/common.sh@353 -- # local d=1 00:20:19.215 19:38:38 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:19.215 19:38:38 ublk -- scripts/common.sh@355 -- # echo 1 00:20:19.215 19:38:38 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:20:19.215 19:38:38 ublk -- scripts/common.sh@366 -- # decimal 2 00:20:19.215 19:38:38 ublk -- scripts/common.sh@353 -- # local d=2 00:20:19.215 19:38:38 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:19.215 19:38:38 ublk -- scripts/common.sh@355 -- # echo 2 00:20:19.215 19:38:38 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:20:19.215 19:38:38 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:19.215 19:38:38 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:19.215 19:38:38 ublk -- scripts/common.sh@368 -- # return 0 00:20:19.215 19:38:38 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:19.215 19:38:38 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:19.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.215 --rc genhtml_branch_coverage=1 00:20:19.215 --rc genhtml_function_coverage=1 00:20:19.215 --rc genhtml_legend=1 00:20:19.215 --rc geninfo_all_blocks=1 00:20:19.215 --rc geninfo_unexecuted_blocks=1 00:20:19.215 00:20:19.215 ' 00:20:19.215 19:38:38 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:19.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.215 --rc genhtml_branch_coverage=1 00:20:19.215 --rc genhtml_function_coverage=1 00:20:19.215 --rc genhtml_legend=1 00:20:19.215 --rc geninfo_all_blocks=1 00:20:19.215 --rc geninfo_unexecuted_blocks=1 00:20:19.215 00:20:19.215 ' 00:20:19.215 19:38:38 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:19.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.215 --rc genhtml_branch_coverage=1 00:20:19.215 --rc genhtml_function_coverage=1 00:20:19.215 --rc genhtml_legend=1 00:20:19.215 --rc geninfo_all_blocks=1 00:20:19.215 --rc geninfo_unexecuted_blocks=1 00:20:19.215 00:20:19.215 ' 00:20:19.215 19:38:38 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:19.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:19.215 --rc genhtml_branch_coverage=1 00:20:19.215 --rc genhtml_function_coverage=1 00:20:19.215 --rc genhtml_legend=1 00:20:19.215 --rc geninfo_all_blocks=1 00:20:19.215 --rc geninfo_unexecuted_blocks=1 00:20:19.215 00:20:19.215 ' 00:20:19.215 19:38:38 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:20:19.215 19:38:38 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:20:19.215 19:38:38 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:20:19.215 19:38:38 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:20:19.215 19:38:38 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:20:19.215 19:38:38 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:20:19.215 19:38:38 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:20:19.215 19:38:38 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:20:19.215 19:38:38 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:20:19.215 19:38:38 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:20:19.215 19:38:38 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:20:19.215 19:38:38 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:20:19.215 19:38:38 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:20:19.215 19:38:38 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:20:19.215 19:38:38 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:20:19.215 19:38:38 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:20:19.215 19:38:38 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:20:19.215 19:38:38 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:20:19.215 19:38:38 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:20:19.215 19:38:38 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:20:19.215 19:38:38 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:19.215 19:38:38 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:19.215 19:38:38 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:19.215 ************************************ 00:20:19.215 START TEST test_save_ublk_config 00:20:19.215 ************************************ 00:20:19.215 19:38:38 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:20:19.215 19:38:38 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:20:19.215 19:38:38 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=73745 00:20:19.215 19:38:38 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:20:19.215 19:38:38 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 73745 00:20:19.215 19:38:38 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73745 ']' 00:20:19.215 19:38:38 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:19.215 19:38:38 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:20:19.215 19:38:38 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:19.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:19.215 19:38:38 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:19.215 19:38:38 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:19.215 19:38:38 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:19.215 [2024-12-05 19:38:38.188021] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
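test_save_ublk_config, starting above, exercises a configuration round-trip: bring up spdk_tgt, create a ublk target plus one disk backed by malloc0, capture the runtime state with save_config, then relaunch spdk_tgt with that JSON fed back via -c (the log below shows it arriving on /dev/fd/63). A bash sketch of the same round-trip using rpc.py, whose save_config command emits the JSON seen below (paths are taken from the log; treat the overall flow as an illustration of what the test script automates, not its exact code):

```bash
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# 1. Snapshot the running target's configuration as JSON; the "ublk"
#    subsystem section records ublk_create_target and ublk_start_disk.
"$rpc" save_config > /tmp/ublk_config.json

# 2. Stop the target, then restart it from the saved snapshot; the ublk
#    target and its disk are recreated by replaying that section.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /tmp/ublk_config.json
```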
00:20:19.215 [2024-12-05 19:38:38.188153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73745 ] 00:20:19.472 [2024-12-05 19:38:38.339226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.472 [2024-12-05 19:38:38.432001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.048 19:38:39 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:20.048 19:38:39 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:20:20.048 19:38:39 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:20:20.048 19:38:39 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:20:20.048 19:38:39 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.048 19:38:39 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:20.048 [2024-12-05 19:38:39.044149] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:20.048 [2024-12-05 19:38:39.044974] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:20.306 malloc0 00:20:20.306 [2024-12-05 19:38:39.108256] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:20:20.306 [2024-12-05 19:38:39.108329] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:20:20.306 [2024-12-05 19:38:39.108339] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:20.306 [2024-12-05 19:38:39.108346] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:20.306 [2024-12-05 19:38:39.116267] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:20.306 [2024-12-05 19:38:39.116289] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:20.306 [2024-12-05 19:38:39.124152] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:20.306 [2024-12-05 19:38:39.124248] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:20.306 [2024-12-05 19:38:39.141150] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:20.306 0 00:20:20.306 19:38:39 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.306 19:38:39 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:20:20.306 19:38:39 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.306 19:38:39 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:20.565 19:38:39 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.565 19:38:39 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:20:20.565 "subsystems": [ 00:20:20.565 { 00:20:20.565 "subsystem": "fsdev", 00:20:20.565 "config": [ 00:20:20.565 { 00:20:20.565 "method": "fsdev_set_opts", 00:20:20.565 "params": { 00:20:20.565 "fsdev_io_pool_size": 65535, 00:20:20.565 "fsdev_io_cache_size": 256 00:20:20.565 } 00:20:20.565 } 00:20:20.565 ] 00:20:20.565 }, 00:20:20.565 { 00:20:20.565 "subsystem": "keyring", 00:20:20.565 "config": [] 00:20:20.565 }, 00:20:20.565 { 00:20:20.565 "subsystem": "iobuf", 00:20:20.565 "config": [ 00:20:20.565 { 
00:20:20.565 "method": "iobuf_set_options", 00:20:20.565 "params": { 00:20:20.565 "small_pool_count": 8192, 00:20:20.565 "large_pool_count": 1024, 00:20:20.565 "small_bufsize": 8192, 00:20:20.565 "large_bufsize": 135168, 00:20:20.565 "enable_numa": false 00:20:20.565 } 00:20:20.565 } 00:20:20.565 ] 00:20:20.565 }, 00:20:20.565 { 00:20:20.565 "subsystem": "sock", 00:20:20.565 "config": [ 00:20:20.565 { 00:20:20.565 "method": "sock_set_default_impl", 00:20:20.565 "params": { 00:20:20.565 "impl_name": "posix" 00:20:20.565 } 00:20:20.565 }, 00:20:20.565 { 00:20:20.565 "method": "sock_impl_set_options", 00:20:20.565 "params": { 00:20:20.565 "impl_name": "ssl", 00:20:20.565 "recv_buf_size": 4096, 00:20:20.565 "send_buf_size": 4096, 00:20:20.565 "enable_recv_pipe": true, 00:20:20.565 "enable_quickack": false, 00:20:20.565 "enable_placement_id": 0, 00:20:20.565 "enable_zerocopy_send_server": true, 00:20:20.565 "enable_zerocopy_send_client": false, 00:20:20.565 "zerocopy_threshold": 0, 00:20:20.565 "tls_version": 0, 00:20:20.565 "enable_ktls": false 00:20:20.565 } 00:20:20.565 }, 00:20:20.565 { 00:20:20.565 "method": "sock_impl_set_options", 00:20:20.565 "params": { 00:20:20.565 "impl_name": "posix", 00:20:20.565 "recv_buf_size": 2097152, 00:20:20.565 "send_buf_size": 2097152, 00:20:20.565 "enable_recv_pipe": true, 00:20:20.565 "enable_quickack": false, 00:20:20.565 "enable_placement_id": 0, 00:20:20.565 "enable_zerocopy_send_server": true, 00:20:20.565 "enable_zerocopy_send_client": false, 00:20:20.565 "zerocopy_threshold": 0, 00:20:20.565 "tls_version": 0, 00:20:20.565 "enable_ktls": false 00:20:20.565 } 00:20:20.565 } 00:20:20.565 ] 00:20:20.565 }, 00:20:20.565 { 00:20:20.565 "subsystem": "vmd", 00:20:20.565 "config": [] 00:20:20.565 }, 00:20:20.565 { 00:20:20.565 "subsystem": "accel", 00:20:20.565 "config": [ 00:20:20.565 { 00:20:20.565 "method": "accel_set_options", 00:20:20.565 "params": { 00:20:20.565 "small_cache_size": 128, 00:20:20.565 "large_cache_size": 16, 00:20:20.565 "task_count": 2048, 00:20:20.565 "sequence_count": 2048, 00:20:20.565 "buf_count": 2048 00:20:20.565 } 00:20:20.565 } 00:20:20.565 ] 00:20:20.565 }, 00:20:20.565 { 00:20:20.565 "subsystem": "bdev", 00:20:20.565 "config": [ 00:20:20.565 { 00:20:20.565 "method": "bdev_set_options", 00:20:20.565 "params": { 00:20:20.565 "bdev_io_pool_size": 65535, 00:20:20.565 "bdev_io_cache_size": 256, 00:20:20.565 "bdev_auto_examine": true, 00:20:20.565 "iobuf_small_cache_size": 128, 00:20:20.565 "iobuf_large_cache_size": 16 00:20:20.565 } 00:20:20.565 }, 00:20:20.565 { 00:20:20.565 "method": "bdev_raid_set_options", 00:20:20.565 "params": { 00:20:20.565 "process_window_size_kb": 1024, 00:20:20.565 "process_max_bandwidth_mb_sec": 0 00:20:20.565 } 00:20:20.565 }, 00:20:20.565 { 00:20:20.565 "method": "bdev_iscsi_set_options", 00:20:20.565 "params": { 00:20:20.565 "timeout_sec": 30 00:20:20.565 } 00:20:20.565 }, 00:20:20.565 { 00:20:20.565 "method": "bdev_nvme_set_options", 00:20:20.565 "params": { 00:20:20.565 "action_on_timeout": "none", 00:20:20.565 "timeout_us": 0, 00:20:20.565 "timeout_admin_us": 0, 00:20:20.565 "keep_alive_timeout_ms": 10000, 00:20:20.565 "arbitration_burst": 0, 00:20:20.565 "low_priority_weight": 0, 00:20:20.565 "medium_priority_weight": 0, 00:20:20.565 "high_priority_weight": 0, 00:20:20.565 "nvme_adminq_poll_period_us": 10000, 00:20:20.565 "nvme_ioq_poll_period_us": 0, 00:20:20.565 "io_queue_requests": 0, 00:20:20.565 "delay_cmd_submit": true, 00:20:20.565 "transport_retry_count": 4, 00:20:20.565 
"bdev_retry_count": 3, 00:20:20.565 "transport_ack_timeout": 0, 00:20:20.565 "ctrlr_loss_timeout_sec": 0, 00:20:20.565 "reconnect_delay_sec": 0, 00:20:20.565 "fast_io_fail_timeout_sec": 0, 00:20:20.565 "disable_auto_failback": false, 00:20:20.565 "generate_uuids": false, 00:20:20.565 "transport_tos": 0, 00:20:20.565 "nvme_error_stat": false, 00:20:20.565 "rdma_srq_size": 0, 00:20:20.565 "io_path_stat": false, 00:20:20.565 "allow_accel_sequence": false, 00:20:20.565 "rdma_max_cq_size": 0, 00:20:20.565 "rdma_cm_event_timeout_ms": 0, 00:20:20.565 "dhchap_digests": [ 00:20:20.565 "sha256", 00:20:20.565 "sha384", 00:20:20.565 "sha512" 00:20:20.565 ], 00:20:20.565 "dhchap_dhgroups": [ 00:20:20.565 "null", 00:20:20.565 "ffdhe2048", 00:20:20.565 "ffdhe3072", 00:20:20.565 "ffdhe4096", 00:20:20.565 "ffdhe6144", 00:20:20.565 "ffdhe8192" 00:20:20.565 ] 00:20:20.565 } 00:20:20.565 }, 00:20:20.565 { 00:20:20.565 "method": "bdev_nvme_set_hotplug", 00:20:20.565 "params": { 00:20:20.565 "period_us": 100000, 00:20:20.565 "enable": false 00:20:20.565 } 00:20:20.565 }, 00:20:20.565 { 00:20:20.565 "method": "bdev_malloc_create", 00:20:20.565 "params": { 00:20:20.565 "name": "malloc0", 00:20:20.565 "num_blocks": 8192, 00:20:20.565 "block_size": 4096, 00:20:20.565 "physical_block_size": 4096, 00:20:20.565 "uuid": "e2cc0cae-b0e1-46ca-9047-460ecd7e0528", 00:20:20.565 "optimal_io_boundary": 0, 00:20:20.565 "md_size": 0, 00:20:20.565 "dif_type": 0, 00:20:20.565 "dif_is_head_of_md": false, 00:20:20.565 "dif_pi_format": 0 00:20:20.565 } 00:20:20.565 }, 00:20:20.565 { 00:20:20.565 "method": "bdev_wait_for_examine" 00:20:20.565 } 00:20:20.565 ] 00:20:20.565 }, 00:20:20.565 { 00:20:20.565 "subsystem": "scsi", 00:20:20.565 "config": null 00:20:20.565 }, 00:20:20.565 { 00:20:20.565 "subsystem": "scheduler", 00:20:20.565 "config": [ 00:20:20.565 { 00:20:20.565 "method": "framework_set_scheduler", 00:20:20.565 "params": { 00:20:20.565 "name": "static" 00:20:20.565 } 00:20:20.565 } 00:20:20.565 ] 00:20:20.565 }, 00:20:20.565 { 00:20:20.565 "subsystem": "vhost_scsi", 00:20:20.565 "config": [] 00:20:20.565 }, 00:20:20.565 { 00:20:20.565 "subsystem": "vhost_blk", 00:20:20.565 "config": [] 00:20:20.565 }, 00:20:20.565 { 00:20:20.565 "subsystem": "ublk", 00:20:20.565 "config": [ 00:20:20.565 { 00:20:20.565 "method": "ublk_create_target", 00:20:20.565 "params": { 00:20:20.565 "cpumask": "1" 00:20:20.565 } 00:20:20.565 }, 00:20:20.565 { 00:20:20.565 "method": "ublk_start_disk", 00:20:20.565 "params": { 00:20:20.565 "bdev_name": "malloc0", 00:20:20.565 "ublk_id": 0, 00:20:20.565 "num_queues": 1, 00:20:20.565 "queue_depth": 128 00:20:20.565 } 00:20:20.565 } 00:20:20.565 ] 00:20:20.565 }, 00:20:20.565 { 00:20:20.565 "subsystem": "nbd", 00:20:20.565 "config": [] 00:20:20.565 }, 00:20:20.565 { 00:20:20.565 "subsystem": "nvmf", 00:20:20.565 "config": [ 00:20:20.565 { 00:20:20.565 "method": "nvmf_set_config", 00:20:20.565 "params": { 00:20:20.565 "discovery_filter": "match_any", 00:20:20.565 "admin_cmd_passthru": { 00:20:20.565 "identify_ctrlr": false 00:20:20.565 }, 00:20:20.565 "dhchap_digests": [ 00:20:20.565 "sha256", 00:20:20.565 "sha384", 00:20:20.565 "sha512" 00:20:20.565 ], 00:20:20.565 "dhchap_dhgroups": [ 00:20:20.565 "null", 00:20:20.565 "ffdhe2048", 00:20:20.565 "ffdhe3072", 00:20:20.565 "ffdhe4096", 00:20:20.565 "ffdhe6144", 00:20:20.565 "ffdhe8192" 00:20:20.565 ] 00:20:20.565 } 00:20:20.565 }, 00:20:20.565 { 00:20:20.566 "method": "nvmf_set_max_subsystems", 00:20:20.566 "params": { 00:20:20.566 "max_subsystems": 1024 
00:20:20.566 } 00:20:20.566 }, 00:20:20.566 { 00:20:20.566 "method": "nvmf_set_crdt", 00:20:20.566 "params": { 00:20:20.566 "crdt1": 0, 00:20:20.566 "crdt2": 0, 00:20:20.566 "crdt3": 0 00:20:20.566 } 00:20:20.566 } 00:20:20.566 ] 00:20:20.566 }, 00:20:20.566 { 00:20:20.566 "subsystem": "iscsi", 00:20:20.566 "config": [ 00:20:20.566 { 00:20:20.566 "method": "iscsi_set_options", 00:20:20.566 "params": { 00:20:20.566 "node_base": "iqn.2016-06.io.spdk", 00:20:20.566 "max_sessions": 128, 00:20:20.566 "max_connections_per_session": 2, 00:20:20.566 "max_queue_depth": 64, 00:20:20.566 "default_time2wait": 2, 00:20:20.566 "default_time2retain": 20, 00:20:20.566 "first_burst_length": 8192, 00:20:20.566 "immediate_data": true, 00:20:20.566 "allow_duplicated_isid": false, 00:20:20.566 "error_recovery_level": 0, 00:20:20.566 "nop_timeout": 60, 00:20:20.566 "nop_in_interval": 30, 00:20:20.566 "disable_chap": false, 00:20:20.566 "require_chap": false, 00:20:20.566 "mutual_chap": false, 00:20:20.566 "chap_group": 0, 00:20:20.566 "max_large_datain_per_connection": 64, 00:20:20.566 "max_r2t_per_connection": 4, 00:20:20.566 "pdu_pool_size": 36864, 00:20:20.566 "immediate_data_pool_size": 16384, 00:20:20.566 "data_out_pool_size": 2048 00:20:20.566 } 00:20:20.566 } 00:20:20.566 ] 00:20:20.566 } 00:20:20.566 ] 00:20:20.566 }' 00:20:20.566 19:38:39 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 73745 00:20:20.566 19:38:39 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73745 ']' 00:20:20.566 19:38:39 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73745 00:20:20.566 19:38:39 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:20:20.566 19:38:39 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:20.566 19:38:39 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73745 00:20:20.566 19:38:39 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:20.566 killing process with pid 73745 00:20:20.566 19:38:39 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:20.566 19:38:39 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73745' 00:20:20.566 19:38:39 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73745 00:20:20.566 19:38:39 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73745 00:20:21.500 [2024-12-05 19:38:40.487960] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:21.757 [2024-12-05 19:38:40.523173] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:21.757 [2024-12-05 19:38:40.523291] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:21.757 [2024-12-05 19:38:40.531155] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:21.757 [2024-12-05 19:38:40.531203] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:21.757 [2024-12-05 19:38:40.531215] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:21.757 [2024-12-05 19:38:40.531238] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:21.757 [2024-12-05 19:38:40.531378] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:23.131 19:38:41 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=73804 00:20:23.131 19:38:41 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 73804 00:20:23.131 19:38:41 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 73804 ']' 00:20:23.131 19:38:41 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.131 19:38:41 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:23.131 19:38:41 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:20:23.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.131 19:38:41 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.131 19:38:41 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:20:23.131 "subsystems": [ 00:20:23.131 { 00:20:23.131 "subsystem": "fsdev", 00:20:23.131 "config": [ 00:20:23.131 { 00:20:23.131 "method": "fsdev_set_opts", 00:20:23.131 "params": { 00:20:23.131 "fsdev_io_pool_size": 65535, 00:20:23.131 "fsdev_io_cache_size": 256 00:20:23.131 } 00:20:23.131 } 00:20:23.131 ] 00:20:23.131 }, 00:20:23.131 { 00:20:23.131 "subsystem": "keyring", 00:20:23.131 "config": [] 00:20:23.131 }, 00:20:23.131 { 00:20:23.131 "subsystem": "iobuf", 00:20:23.131 "config": [ 00:20:23.131 { 00:20:23.131 "method": "iobuf_set_options", 00:20:23.131 "params": { 00:20:23.131 "small_pool_count": 8192, 00:20:23.131 "large_pool_count": 1024, 00:20:23.131 "small_bufsize": 8192, 00:20:23.131 "large_bufsize": 135168, 00:20:23.131 "enable_numa": false 00:20:23.131 } 00:20:23.131 } 00:20:23.131 ] 00:20:23.131 }, 00:20:23.131 { 00:20:23.131 "subsystem": "sock", 00:20:23.131 "config": [ 00:20:23.131 { 00:20:23.131 "method": "sock_set_default_impl", 00:20:23.131 "params": { 00:20:23.131 "impl_name": "posix" 00:20:23.131 } 00:20:23.131 }, 00:20:23.131 { 00:20:23.131 "method": "sock_impl_set_options", 00:20:23.131 "params": { 00:20:23.131 "impl_name": "ssl", 00:20:23.131 "recv_buf_size": 4096, 00:20:23.131 "send_buf_size": 4096, 00:20:23.131 "enable_recv_pipe": true, 00:20:23.131 "enable_quickack": false, 00:20:23.131 "enable_placement_id": 0, 00:20:23.131 "enable_zerocopy_send_server": true, 00:20:23.131 "enable_zerocopy_send_client": false, 00:20:23.131 "zerocopy_threshold": 0, 00:20:23.131 "tls_version": 0, 00:20:23.131 "enable_ktls": false 00:20:23.131 } 00:20:23.131 }, 00:20:23.131 { 00:20:23.131 "method": "sock_impl_set_options", 00:20:23.131 "params": { 00:20:23.131 "impl_name": "posix", 00:20:23.131 "recv_buf_size": 2097152, 00:20:23.131 "send_buf_size": 2097152, 00:20:23.131 "enable_recv_pipe": true, 00:20:23.131 "enable_quickack": false, 00:20:23.131 "enable_placement_id": 0, 00:20:23.131 "enable_zerocopy_send_server": true, 00:20:23.131 "enable_zerocopy_send_client": false, 00:20:23.131 "zerocopy_threshold": 0, 00:20:23.131 "tls_version": 0, 00:20:23.131 "enable_ktls": false 00:20:23.131 } 00:20:23.131 } 00:20:23.131 ] 00:20:23.131 }, 00:20:23.131 { 00:20:23.131 "subsystem": "vmd", 00:20:23.131 "config": [] 00:20:23.131 }, 00:20:23.131 { 00:20:23.131 "subsystem": "accel", 00:20:23.131 "config": [ 00:20:23.131 { 00:20:23.131 "method": "accel_set_options", 00:20:23.131 "params": { 00:20:23.131 "small_cache_size": 128, 00:20:23.131 "large_cache_size": 16, 00:20:23.131 "task_count": 2048, 00:20:23.131 "sequence_count": 2048, 00:20:23.131 "buf_count": 2048 00:20:23.131 } 00:20:23.131 } 00:20:23.131 ] 00:20:23.131 }, 
00:20:23.131 { 00:20:23.131 "subsystem": "bdev", 00:20:23.131 "config": [ 00:20:23.131 { 00:20:23.131 "method": "bdev_set_options", 00:20:23.131 "params": { 00:20:23.131 "bdev_io_pool_size": 65535, 00:20:23.131 "bdev_io_cache_size": 256, 00:20:23.131 "bdev_auto_examine": true, 00:20:23.131 "iobuf_small_cache_size": 128, 00:20:23.131 "iobuf_large_cache_size": 16 00:20:23.131 } 00:20:23.131 }, 00:20:23.131 { 00:20:23.131 "method": "bdev_raid_set_options", 00:20:23.131 "params": { 00:20:23.131 "process_window_size_kb": 1024, 00:20:23.131 "process_max_bandwidth_mb_sec": 0 00:20:23.131 } 00:20:23.131 }, 00:20:23.131 { 00:20:23.131 "method": "bdev_iscsi_set_options", 00:20:23.131 "params": { 00:20:23.131 "timeout_sec": 30 00:20:23.131 } 00:20:23.131 }, 00:20:23.131 { 00:20:23.131 "method": "bdev_nvme_set_options", 00:20:23.131 "params": { 00:20:23.131 "action_on_timeout": "none", 00:20:23.131 "timeout_us": 0, 00:20:23.131 "timeout_admin_us": 0, 00:20:23.131 "keep_alive_timeout_ms": 10000, 00:20:23.131 "arbitration_burst": 0, 00:20:23.131 "low_priority_weight": 0, 00:20:23.131 "medium_priority_weight": 0, 00:20:23.131 "high_priority_weight": 0, 00:20:23.131 "nvme_adminq_poll_period_us": 10000, 00:20:23.131 "nvme_ioq_poll_period_us": 0, 00:20:23.131 "io_queue_requests": 0, 00:20:23.131 "delay_cmd_submit": true, 00:20:23.131 "transport_retry_count": 4, 00:20:23.131 "bdev_retry_count": 3, 00:20:23.131 "transport_ack_timeout": 0, 00:20:23.131 "ctrlr_loss_timeout_sec": 0, 00:20:23.131 "reconnect_delay_sec": 0, 00:20:23.131 "fast_io_fail_timeout_sec": 0, 00:20:23.131 "disable_auto_failback": false, 00:20:23.131 "generate_uuids": false, 00:20:23.131 "transport_tos": 0, 00:20:23.131 "nvme_error_stat": false, 00:20:23.131 "rdma_srq_size": 0, 00:20:23.131 "io_path_stat": false, 00:20:23.131 "allow_accel_sequence": false, 00:20:23.131 "rdma_max_cq_size": 0, 00:20:23.131 "rdma_cm_event_timeout_ms": 0, 00:20:23.131 "dhchap_digests": [ 00:20:23.131 "sha256", 00:20:23.131 "sha384", 00:20:23.131 "sha512" 00:20:23.131 ], 00:20:23.131 "dhchap_dhgroups": [ 00:20:23.131 "null", 00:20:23.131 "ffdhe2048", 00:20:23.131 "ffdhe3072", 00:20:23.131 "ffdhe4096", 00:20:23.131 "ffdhe6144", 00:20:23.131 "ffdhe8192" 00:20:23.131 ] 00:20:23.131 } 00:20:23.131 }, 00:20:23.131 { 00:20:23.131 "method": "bdev_nvme_set_hotplug", 00:20:23.131 "params": { 00:20:23.131 "period_us": 100000, 00:20:23.131 "enable": false 00:20:23.131 } 00:20:23.131 }, 00:20:23.131 { 00:20:23.131 "method": "bdev_malloc_create", 00:20:23.131 "params": { 00:20:23.131 "name": "malloc0", 00:20:23.131 "num_blocks": 8192, 00:20:23.131 "block_size": 4096, 00:20:23.131 "physical_block_size": 4096, 00:20:23.131 "uuid": "e2cc0cae-b0e1-46ca-9047-460ecd7e0528", 00:20:23.131 "optimal_io_boundary": 0, 00:20:23.131 "md_size": 0, 00:20:23.131 "dif_type": 0, 00:20:23.131 "dif_is_head_of_md": false, 00:20:23.131 "dif_pi_format": 0 00:20:23.131 } 00:20:23.131 }, 00:20:23.131 { 00:20:23.131 "method": "bdev_wait_for_examine" 00:20:23.131 } 00:20:23.131 ] 00:20:23.131 }, 00:20:23.131 { 00:20:23.131 "subsystem": "scsi", 00:20:23.131 "config": null 00:20:23.131 }, 00:20:23.131 { 00:20:23.131 "subsystem": "scheduler", 00:20:23.131 "config": [ 00:20:23.131 { 00:20:23.131 "method": "framework_set_scheduler", 00:20:23.131 "params": { 00:20:23.131 "name": "static" 00:20:23.131 } 00:20:23.131 } 00:20:23.131 ] 00:20:23.131 }, 00:20:23.131 { 00:20:23.131 "subsystem": "vhost_scsi", 00:20:23.131 "config": [] 00:20:23.131 }, 00:20:23.131 { 00:20:23.131 "subsystem": "vhost_blk", 00:20:23.131 
"config": [] 00:20:23.131 }, 00:20:23.131 { 00:20:23.131 "subsystem": "ublk", 00:20:23.131 "config": [ 00:20:23.131 { 00:20:23.131 "method": "ublk_create_target", 00:20:23.131 "params": { 00:20:23.131 "cpumask": "1" 00:20:23.131 } 00:20:23.131 }, 00:20:23.131 { 00:20:23.131 "method": "ublk_start_disk", 00:20:23.131 "params": { 00:20:23.131 "bdev_name": "malloc0", 00:20:23.131 "ublk_id": 0, 00:20:23.131 "num_queues": 1, 00:20:23.131 "queue_depth": 128 00:20:23.131 } 00:20:23.131 } 00:20:23.131 ] 00:20:23.131 }, 00:20:23.131 { 00:20:23.131 "subsystem": "nbd", 00:20:23.131 "config": [] 00:20:23.131 }, 00:20:23.131 { 00:20:23.132 "subsystem": "nvmf", 00:20:23.132 "config": [ 00:20:23.132 { 00:20:23.132 "method": "nvmf_set_config", 00:20:23.132 "params": { 00:20:23.132 "discovery_filter": "match_any", 00:20:23.132 "admin_cmd_passthru": { 00:20:23.132 "identify_ctrlr": false 00:20:23.132 }, 00:20:23.132 "dhchap_digests": [ 00:20:23.132 "sha256", 00:20:23.132 "sha384", 00:20:23.132 "sha512" 00:20:23.132 ], 00:20:23.132 "dhchap_dhgroups": [ 00:20:23.132 "null", 00:20:23.132 "ffdhe2048", 00:20:23.132 "ffdhe3072", 00:20:23.132 "ffdhe4096", 00:20:23.132 "ffdhe6144", 00:20:23.132 "ffdhe8192" 00:20:23.132 ] 00:20:23.132 } 00:20:23.132 }, 00:20:23.132 { 00:20:23.132 "method": "nvmf_set_max_subsystems", 00:20:23.132 "params": { 00:20:23.132 "max_subsystems": 1024 00:20:23.132 } 00:20:23.132 }, 00:20:23.132 { 00:20:23.132 "method": "nvmf_set_crdt", 00:20:23.132 "params": { 00:20:23.132 "crdt1": 0, 00:20:23.132 "crdt2": 0, 00:20:23.132 "crdt3": 0 00:20:23.132 } 00:20:23.132 } 00:20:23.132 ] 00:20:23.132 }, 00:20:23.132 { 00:20:23.132 "subsystem": "iscsi", 00:20:23.132 "config": [ 00:20:23.132 { 00:20:23.132 "method": "iscsi_set_options", 00:20:23.132 "params": { 00:20:23.132 "node_base": "iqn.2016-06.io.spdk", 00:20:23.132 "max_sessions": 128, 00:20:23.132 "max_connections_per_session": 2, 00:20:23.132 "max_queue_depth": 64, 00:20:23.132 "default_time2wait": 2, 00:20:23.132 "default_time2retain": 20, 00:20:23.132 "first_burst_length": 8192, 00:20:23.132 "immediate_data": true, 00:20:23.132 "allow_duplicated_isid": false, 00:20:23.132 "error_recovery_level": 0, 00:20:23.132 "nop_timeout": 60, 00:20:23.132 "nop_in_interval": 30, 00:20:23.132 "disable_chap": false, 00:20:23.132 "require_chap": false, 00:20:23.132 "mutual_chap": false, 00:20:23.132 "chap_group": 0, 00:20:23.132 "max_large_datain_per_connection": 64, 00:20:23.132 "max_r2t_per_connection": 4, 00:20:23.132 "pdu_pool_size": 36864, 00:20:23.132 "immediate_data_pool_size": 16384, 00:20:23.132 "data_out_pool_size": 2048 00:20:23.132 } 00:20:23.132 } 00:20:23.132 ] 00:20:23.132 } 00:20:23.132 ] 00:20:23.132 }' 00:20:23.132 19:38:41 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:23.132 19:38:41 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:23.132 [2024-12-05 19:38:41.983267] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
00:20:23.132 [2024-12-05 19:38:41.983383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73804 ] 00:20:23.390 [2024-12-05 19:38:42.144052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.390 [2024-12-05 19:38:42.240549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:24.326 [2024-12-05 19:38:43.005146] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:24.326 [2024-12-05 19:38:43.005947] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:24.326 [2024-12-05 19:38:43.013258] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:20:24.326 [2024-12-05 19:38:43.013329] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:20:24.326 [2024-12-05 19:38:43.013338] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:24.326 [2024-12-05 19:38:43.013345] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:24.326 [2024-12-05 19:38:43.021259] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:24.326 [2024-12-05 19:38:43.021280] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:24.326 [2024-12-05 19:38:43.029156] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:24.326 [2024-12-05 19:38:43.029240] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:24.326 [2024-12-05 19:38:43.046152] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:24.326 19:38:43 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:24.326 19:38:43 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:20:24.326 19:38:43 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:20:24.326 19:38:43 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:20:24.327 19:38:43 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.327 19:38:43 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:24.327 19:38:43 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.327 19:38:43 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:24.327 19:38:43 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:20:24.327 19:38:43 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 73804 00:20:24.327 19:38:43 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 73804 ']' 00:20:24.327 19:38:43 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 73804 00:20:24.327 19:38:43 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:20:24.327 19:38:43 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:24.327 19:38:43 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73804 00:20:24.327 killing process with pid 73804 00:20:24.327 19:38:43 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:24.327 
19:38:43 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:24.327 19:38:43 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73804' 00:20:24.327 19:38:43 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 73804 00:20:24.327 19:38:43 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 73804 00:20:25.340 [2024-12-05 19:38:44.288733] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:25.340 [2024-12-05 19:38:44.340637] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:25.340 [2024-12-05 19:38:44.340862] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:25.340 [2024-12-05 19:38:44.345161] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:25.340 [2024-12-05 19:38:44.345298] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:25.340 [2024-12-05 19:38:44.345308] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:25.340 [2024-12-05 19:38:44.345337] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:25.599 [2024-12-05 19:38:44.345478] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:26.972 19:38:45 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:20:26.972 00:20:26.972 real 0m7.433s 00:20:26.972 user 0m5.243s 00:20:26.972 sys 0m2.776s 00:20:26.972 19:38:45 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:26.972 19:38:45 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:20:26.972 ************************************ 00:20:26.972 END TEST test_save_ublk_config 00:20:26.972 ************************************ 00:20:26.972 19:38:45 ublk -- ublk/ublk.sh@139 -- # spdk_pid=73877 00:20:26.972 19:38:45 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:26.972 19:38:45 ublk -- ublk/ublk.sh@141 -- # waitforlisten 73877 00:20:26.972 19:38:45 ublk -- common/autotest_common.sh@835 -- # '[' -z 73877 ']' 00:20:26.972 19:38:45 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:26.972 19:38:45 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:26.972 19:38:45 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:26.972 19:38:45 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.972 19:38:45 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:26.972 19:38:45 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:26.972 [2024-12-05 19:38:45.654773] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
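[editor's note] Here the shared target for the remaining tests comes up on two cores (-m 0x3) while the harness blocks in waitforlisten until the RPC socket answers. A rough stand-in for that wait, assuming spdk_get_version as the liveness probe:

  build/bin/spdk_tgt -m 0x3 -L ublk & tgtpid=$!
  # poll until the RPC socket is up; simplified version of waitforlisten()
  until scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do sleep 0.2; done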
00:20:26.972 [2024-12-05 19:38:45.654895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73877 ] 00:20:26.972 [2024-12-05 19:38:45.808369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:26.972 [2024-12-05 19:38:45.902850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.972 [2024-12-05 19:38:45.902922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:27.539 19:38:46 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:27.539 19:38:46 ublk -- common/autotest_common.sh@868 -- # return 0 00:20:27.539 19:38:46 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:20:27.539 19:38:46 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:27.539 19:38:46 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:27.539 19:38:46 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:27.539 ************************************ 00:20:27.539 START TEST test_create_ublk 00:20:27.539 ************************************ 00:20:27.539 19:38:46 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:20:27.539 19:38:46 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:20:27.539 19:38:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.539 19:38:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:27.539 [2024-12-05 19:38:46.513148] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:27.539 [2024-12-05 19:38:46.514975] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:27.539 19:38:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.539 19:38:46 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:20:27.539 19:38:46 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:20:27.539 19:38:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.539 19:38:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:27.797 19:38:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.797 19:38:46 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:20:27.797 19:38:46 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:20:27.797 19:38:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.797 19:38:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:27.797 [2024-12-05 19:38:46.714277] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:20:27.797 [2024-12-05 19:38:46.714635] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:20:27.797 [2024-12-05 19:38:46.714651] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:27.797 [2024-12-05 19:38:46.714658] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:27.797 [2024-12-05 19:38:46.722376] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:27.797 [2024-12-05 19:38:46.722394] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:27.797 
[2024-12-05 19:38:46.730157] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:27.797 [2024-12-05 19:38:46.730770] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:27.797 [2024-12-05 19:38:46.744220] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:27.797 19:38:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.797 19:38:46 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:20:27.797 19:38:46 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:20:27.797 19:38:46 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:20:27.797 19:38:46 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.797 19:38:46 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:27.797 19:38:46 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.797 19:38:46 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:20:27.797 { 00:20:27.797 "ublk_device": "/dev/ublkb0", 00:20:27.797 "id": 0, 00:20:27.797 "queue_depth": 512, 00:20:27.797 "num_queues": 4, 00:20:27.797 "bdev_name": "Malloc0" 00:20:27.797 } 00:20:27.797 ]' 00:20:27.798 19:38:46 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:20:27.798 19:38:46 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:27.798 19:38:46 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:20:28.055 19:38:46 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:20:28.055 19:38:46 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:20:28.055 19:38:46 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:20:28.055 19:38:46 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:20:28.055 19:38:46 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:20:28.055 19:38:46 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:20:28.055 19:38:46 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:20:28.055 19:38:46 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:20:28.055 19:38:46 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:20:28.055 19:38:46 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:20:28.055 19:38:46 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:20:28.055 19:38:46 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:20:28.055 19:38:46 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:20:28.055 19:38:46 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:20:28.055 19:38:46 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:20:28.055 19:38:46 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:20:28.055 19:38:46 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:20:28.055 19:38:46 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
00:20:28.055 19:38:46 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:20:28.055 fio: verification read phase will never start because write phase uses all of runtime 00:20:28.055 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:20:28.055 fio-3.35 00:20:28.055 Starting 1 process 00:20:40.267 00:20:40.267 fio_test: (groupid=0, jobs=1): err= 0: pid=73918: Thu Dec 5 19:38:57 2024 00:20:40.267 write: IOPS=20.5k, BW=80.1MiB/s (84.0MB/s)(801MiB/10001msec); 0 zone resets 00:20:40.267 clat (usec): min=31, max=3968, avg=48.00, stdev=82.30 00:20:40.267 lat (usec): min=32, max=3980, avg=48.44, stdev=82.31 00:20:40.267 clat percentiles (usec): 00:20:40.267 | 1.00th=[ 37], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 42], 00:20:40.267 | 30.00th=[ 43], 40.00th=[ 44], 50.00th=[ 44], 60.00th=[ 45], 00:20:40.267 | 70.00th=[ 46], 80.00th=[ 48], 90.00th=[ 51], 95.00th=[ 56], 00:20:40.267 | 99.00th=[ 66], 99.50th=[ 72], 99.90th=[ 1418], 99.95th=[ 2474], 00:20:40.267 | 99.99th=[ 3392] 00:20:40.267 bw ( KiB/s): min=75848, max=84392, per=99.96%, avg=81963.37, stdev=2144.28, samples=19 00:20:40.267 iops : min=18962, max=21098, avg=20490.95, stdev=536.00, samples=19 00:20:40.267 lat (usec) : 50=89.14%, 100=10.59%, 250=0.11%, 500=0.03%, 750=0.01% 00:20:40.267 lat (usec) : 1000=0.01% 00:20:40.267 lat (msec) : 2=0.05%, 4=0.07% 00:20:40.267 cpu : usr=3.58%, sys=16.83%, ctx=205011, majf=0, minf=796 00:20:40.267 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:40.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:40.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:40.267 issued rwts: total=0,205011,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:40.267 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:40.267 00:20:40.267 Run status group 0 (all jobs): 00:20:40.267 WRITE: bw=80.1MiB/s (84.0MB/s), 80.1MiB/s-80.1MiB/s (84.0MB/s-84.0MB/s), io=801MiB (840MB), run=10001-10001msec 00:20:40.267 00:20:40.267 Disk stats (read/write): 00:20:40.267 ublkb0: ios=0/202844, merge=0/0, ticks=0/7952, in_queue=7953, util=98.90% 00:20:40.267 19:38:57 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:20:40.267 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.267 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.267 [2024-12-05 19:38:57.144565] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:40.267 [2024-12-05 19:38:57.194161] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:40.267 [2024-12-05 19:38:57.194762] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:40.268 [2024-12-05 19:38:57.204151] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:40.268 [2024-12-05 19:38:57.204392] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:40.268 [2024-12-05 19:38:57.204405] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.268 19:38:57 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
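[editor's note] The NOT wrapper entered above asserts that a second stop of the same disk fails, since the device was already torn down. Reduced to plain rpc.py calls, the behavior being checked (the -19 error response appears in the trace below) is roughly:

  scripts/rpc.py ublk_stop_disk 0   # first call: succeeds, /dev/ublkb0 removed
  scripts/rpc.py ublk_stop_disk 0   # second call: JSON-RPC error -19 (No such device)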
00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.268 [2024-12-05 19:38:57.212203] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:20:40.268 request: 00:20:40.268 { 00:20:40.268 "ublk_id": 0, 00:20:40.268 "method": "ublk_stop_disk", 00:20:40.268 "req_id": 1 00:20:40.268 } 00:20:40.268 Got JSON-RPC error response 00:20:40.268 response: 00:20:40.268 { 00:20:40.268 "code": -19, 00:20:40.268 "message": "No such device" 00:20:40.268 } 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:40.268 19:38:57 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.268 [2024-12-05 19:38:57.228202] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:40.268 [2024-12-05 19:38:57.231803] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:40.268 [2024-12-05 19:38:57.231832] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.268 19:38:57 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.268 19:38:57 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:20:40.268 19:38:57 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.268 19:38:57 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:20:40.268 19:38:57 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:20:40.268 19:38:57 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:20:40.268 19:38:57 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.268 19:38:57 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:20:40.268 19:38:57 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:20:40.268 ************************************ 00:20:40.268 END TEST test_create_ublk 00:20:40.268 ************************************ 00:20:40.268 19:38:57 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:20:40.268 00:20:40.268 real 0m11.198s 00:20:40.268 user 0m0.664s 00:20:40.268 sys 0m1.757s 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:40.268 19:38:57 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.268 19:38:57 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:20:40.268 19:38:57 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:40.268 19:38:57 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:40.268 19:38:57 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.268 ************************************ 00:20:40.268 START TEST test_create_multi_ublk 00:20:40.268 ************************************ 00:20:40.268 19:38:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:20:40.268 19:38:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:20:40.268 19:38:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.268 19:38:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.268 [2024-12-05 19:38:57.743143] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:40.268 [2024-12-05 19:38:57.744687] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:40.268 19:38:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.268 19:38:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:20:40.268 19:38:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:20:40.268 19:38:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:40.268 19:38:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:20:40.268 19:38:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.268 19:38:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.268 19:38:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.268 19:38:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:20:40.268 19:38:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:20:40.268 19:38:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.268 19:38:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.268 [2024-12-05 19:38:57.947249] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
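[editor's note] Each of the four devices in this multi-ublk test is created with the same pair of RPCs; per device i, the trace above and below amounts to (flags copied from the trace):

  scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096   # 128 MiB bdev, 4096-byte blocks
  scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512   # exposes /dev/ublkb$i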
00:20:40.268 [2024-12-05 19:38:57.947540] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:20:40.268 [2024-12-05 19:38:57.947547] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:20:40.268 [2024-12-05 19:38:57.947562] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:20:40.268 [2024-12-05 19:38:57.959340] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:40.268 [2024-12-05 19:38:57.959361] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:40.268 [2024-12-05 19:38:57.971150] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:40.268 [2024-12-05 19:38:57.971641] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:20:40.268 [2024-12-05 19:38:58.019150] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:20:40.268 19:38:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.268 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:20:40.268 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:40.268 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:20:40.268 19:38:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.268 19:38:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.268 19:38:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.268 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:20:40.268 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:20:40.268 19:38:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.268 19:38:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.268 [2024-12-05 19:38:58.247252] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:20:40.268 [2024-12-05 19:38:58.247541] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:20:40.268 [2024-12-05 19:38:58.247554] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:40.268 [2024-12-05 19:38:58.247560] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:20:40.268 [2024-12-05 19:38:58.259157] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:40.268 [2024-12-05 19:38:58.259174] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:40.268 [2024-12-05 19:38:58.271155] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:40.268 [2024-12-05 19:38:58.271638] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:20:40.268 [2024-12-05 19:38:58.284147] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:20:40.268 19:38:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.268 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:20:40.268 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:40.268 19:38:58 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:20:40.268 19:38:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.268 19:38:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.268 19:38:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.268 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:20:40.268 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:20:40.268 19:38:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.268 19:38:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.268 [2024-12-05 19:38:58.511267] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:20:40.268 [2024-12-05 19:38:58.511558] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:20:40.269 [2024-12-05 19:38:58.511570] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:20:40.269 [2024-12-05 19:38:58.511576] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:20:40.269 [2024-12-05 19:38:58.523157] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:40.269 [2024-12-05 19:38:58.523177] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:40.269 [2024-12-05 19:38:58.535149] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:40.269 [2024-12-05 19:38:58.535641] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:20:40.269 [2024-12-05 19:38:58.548166] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.269 [2024-12-05 19:38:58.719239] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:20:40.269 [2024-12-05 19:38:58.719522] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:20:40.269 [2024-12-05 19:38:58.719535] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:20:40.269 [2024-12-05 19:38:58.719540] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:20:40.269 [2024-12-05 
19:38:58.727158] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:40.269 [2024-12-05 19:38:58.727176] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:40.269 [2024-12-05 19:38:58.734158] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:40.269 [2024-12-05 19:38:58.734639] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:20:40.269 [2024-12-05 19:38:58.744165] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:20:40.269 { 00:20:40.269 "ublk_device": "/dev/ublkb0", 00:20:40.269 "id": 0, 00:20:40.269 "queue_depth": 512, 00:20:40.269 "num_queues": 4, 00:20:40.269 "bdev_name": "Malloc0" 00:20:40.269 }, 00:20:40.269 { 00:20:40.269 "ublk_device": "/dev/ublkb1", 00:20:40.269 "id": 1, 00:20:40.269 "queue_depth": 512, 00:20:40.269 "num_queues": 4, 00:20:40.269 "bdev_name": "Malloc1" 00:20:40.269 }, 00:20:40.269 { 00:20:40.269 "ublk_device": "/dev/ublkb2", 00:20:40.269 "id": 2, 00:20:40.269 "queue_depth": 512, 00:20:40.269 "num_queues": 4, 00:20:40.269 "bdev_name": "Malloc2" 00:20:40.269 }, 00:20:40.269 { 00:20:40.269 "ublk_device": "/dev/ublkb3", 00:20:40.269 "id": 3, 00:20:40.269 "queue_depth": 512, 00:20:40.269 "num_queues": 4, 00:20:40.269 "bdev_name": "Malloc3" 00:20:40.269 } 00:20:40.269 ]' 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
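[editor's note] The per-device assertions running here are just fields plucked from ublk_get_disks with jq; for example, the check on the second disk boils down to:

  scripts/rpc.py ublk_get_disks | jq -r '.[1].ublk_device'   # expected: /dev/ublkb1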
00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:20:40.269 19:38:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:20:40.269 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:40.269 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:20:40.269 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:40.269 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:20:40.269 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:20:40.269 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:40.269 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:20:40.269 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:20:40.269 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:20:40.269 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:20:40.269 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:20:40.269 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:40.269 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:20:40.269 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:40.269 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:20:40.269 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:20:40.269 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:40.269 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:20:40.269 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:20:40.269 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:20:40.528 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:20:40.528 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:20:40.528 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:20:40.528 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:20:40.528 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:20:40.528 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:20:40.528 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:20:40.528 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:20:40.528 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:20:40.528 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:40.528 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:20:40.528 19:38:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.528 19:38:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.528 [2024-12-05 19:38:59.375216] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:20:40.528 [2024-12-05 19:38:59.416584] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:40.528 [2024-12-05 19:38:59.417562] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:20:40.528 [2024-12-05 19:38:59.423149] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:40.528 [2024-12-05 19:38:59.423378] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:20:40.528 [2024-12-05 19:38:59.423392] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:20:40.528 19:38:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.528 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:40.528 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:20:40.528 19:38:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.528 19:38:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.528 [2024-12-05 19:38:59.439219] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:20:40.528 [2024-12-05 19:38:59.479152] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:40.528 [2024-12-05 19:38:59.479802] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:20:40.528 [2024-12-05 19:38:59.487150] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:40.528 [2024-12-05 19:38:59.487372] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:20:40.528 [2024-12-05 19:38:59.487385] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:20:40.528 19:38:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.528 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:40.528 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:20:40.528 19:38:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.528 19:38:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:40.528 [2024-12-05 19:38:59.502215] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:20:40.786 [2024-12-05 19:38:59.543578] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:40.786 [2024-12-05 19:38:59.544545] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:20:40.786 [2024-12-05 19:38:59.551155] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:40.786 [2024-12-05 19:38:59.551372] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:20:40.786 [2024-12-05 19:38:59.551385] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:20:40.786 19:38:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.786 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:40.786 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:20:40.786 19:38:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.786 19:38:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
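[editor's note] The teardown that follows stops each disk, destroys the ublk target with an extended timeout, and deletes the backing bdevs; compressed into plain commands (mirroring the rpc.py -t 120 ublk_destroy_target call visible below), it is roughly:

  for i in 0 1 2 3; do scripts/rpc.py ublk_stop_disk $i; done
  scripts/rpc.py -t 120 ublk_destroy_target
  for i in 0 1 2 3; do scripts/rpc.py bdev_malloc_delete Malloc$i; done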
00:20:40.786 [2024-12-05 19:38:59.566217] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:20:40.786 [2024-12-05 19:38:59.597579] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:20:40.786 [2024-12-05 19:38:59.598485] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:20:40.786 [2024-12-05 19:38:59.607152] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:20:40.786 [2024-12-05 19:38:59.607366] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:20:40.786 [2024-12-05 19:38:59.607378] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:20:40.786 19:38:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.786 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:20:41.044 [2024-12-05 19:38:59.799197] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:41.044 [2024-12-05 19:38:59.802839] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:41.044 [2024-12-05 19:38:59.802866] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:20:41.044 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:20:41.044 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:41.044 19:38:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:41.044 19:38:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.044 19:38:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:41.303 19:39:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.303 19:39:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:41.303 19:39:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:41.303 19:39:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.303 19:39:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:41.560 19:39:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.560 19:39:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:41.560 19:39:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:20:41.560 19:39:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.560 19:39:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:41.818 19:39:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.818 19:39:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:20:41.818 19:39:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:20:41.818 19:39:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.818 19:39:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:42.075 19:39:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.075 19:39:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:20:42.075 19:39:00 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:20:42.075 19:39:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.075 19:39:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:42.075 19:39:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.075 19:39:00 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:20:42.075 19:39:00 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:20:42.075 19:39:00 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:20:42.075 19:39:00 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:20:42.075 19:39:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.075 19:39:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:42.075 19:39:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.075 19:39:00 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:20:42.075 19:39:00 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:20:42.075 ************************************ 00:20:42.075 END TEST test_create_multi_ublk 00:20:42.075 ************************************ 00:20:42.075 19:39:00 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:20:42.075 00:20:42.075 real 0m3.267s 00:20:42.075 user 0m0.785s 00:20:42.075 sys 0m0.137s 00:20:42.075 19:39:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:42.075 19:39:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:20:42.075 19:39:01 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:42.075 19:39:01 ublk -- ublk/ublk.sh@147 -- # cleanup 00:20:42.075 19:39:01 ublk -- ublk/ublk.sh@130 -- # killprocess 73877 00:20:42.075 19:39:01 ublk -- common/autotest_common.sh@954 -- # '[' -z 73877 ']' 00:20:42.075 19:39:01 ublk -- common/autotest_common.sh@958 -- # kill -0 73877 00:20:42.075 19:39:01 ublk -- common/autotest_common.sh@959 -- # uname 00:20:42.075 19:39:01 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:42.075 19:39:01 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73877 00:20:42.075 killing process with pid 73877 00:20:42.075 19:39:01 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:42.075 19:39:01 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:42.075 19:39:01 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73877' 00:20:42.075 19:39:01 ublk -- common/autotest_common.sh@973 -- # kill 73877 00:20:42.075 19:39:01 ublk -- common/autotest_common.sh@978 -- # wait 73877 00:20:42.640 [2024-12-05 19:39:01.575284] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:20:42.640 [2024-12-05 19:39:01.575329] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:20:43.577 00:20:43.577 real 0m24.274s 00:20:43.577 user 0m34.927s 00:20:43.577 sys 0m9.637s 00:20:43.577 ************************************ 00:20:43.577 END TEST ublk 00:20:43.577 ************************************ 00:20:43.577 19:39:02 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:43.577 19:39:02 ublk -- common/autotest_common.sh@10 -- # set +x 00:20:43.577 19:39:02 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:20:43.577 19:39:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:20:43.577 19:39:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:43.577 19:39:02 -- common/autotest_common.sh@10 -- # set +x 00:20:43.577 ************************************ 00:20:43.577 START TEST ublk_recovery 00:20:43.577 ************************************ 00:20:43.577 19:39:02 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:20:43.577 * Looking for test storage... 00:20:43.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:20:43.577 19:39:02 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:43.577 19:39:02 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:20:43.577 19:39:02 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:43.577 19:39:02 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:43.577 19:39:02 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:20:43.577 19:39:02 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:43.577 19:39:02 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:43.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.577 --rc genhtml_branch_coverage=1 00:20:43.577 --rc genhtml_function_coverage=1 00:20:43.577 --rc genhtml_legend=1 00:20:43.577 --rc geninfo_all_blocks=1 00:20:43.577 --rc geninfo_unexecuted_blocks=1 00:20:43.577 00:20:43.577 ' 00:20:43.577 19:39:02 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:43.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.577 --rc genhtml_branch_coverage=1 00:20:43.577 --rc genhtml_function_coverage=1 00:20:43.577 --rc genhtml_legend=1 00:20:43.577 --rc geninfo_all_blocks=1 00:20:43.577 --rc geninfo_unexecuted_blocks=1 00:20:43.577 00:20:43.577 ' 00:20:43.577 19:39:02 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:43.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.577 --rc genhtml_branch_coverage=1 00:20:43.577 --rc genhtml_function_coverage=1 00:20:43.577 --rc genhtml_legend=1 00:20:43.577 --rc geninfo_all_blocks=1 00:20:43.577 --rc geninfo_unexecuted_blocks=1 00:20:43.577 00:20:43.577 ' 00:20:43.577 19:39:02 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:43.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.577 --rc genhtml_branch_coverage=1 00:20:43.577 --rc genhtml_function_coverage=1 00:20:43.577 --rc genhtml_legend=1 00:20:43.577 --rc geninfo_all_blocks=1 00:20:43.577 --rc geninfo_unexecuted_blocks=1 00:20:43.577 00:20:43.577 ' 00:20:43.577 19:39:02 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:20:43.577 19:39:02 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:20:43.577 19:39:02 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:20:43.577 19:39:02 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:20:43.577 19:39:02 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:20:43.577 19:39:02 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:20:43.577 19:39:02 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:20:43.577 19:39:02 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:20:43.577 19:39:02 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:20:43.577 19:39:02 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:20:43.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.577 19:39:02 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=74269 00:20:43.577 19:39:02 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:43.577 19:39:02 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 74269 00:20:43.577 19:39:02 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74269 ']' 00:20:43.577 19:39:02 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.577 19:39:02 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.577 19:39:02 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.577 19:39:02 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.577 19:39:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:43.577 19:39:02 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:43.577 [2024-12-05 19:39:02.484920] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:20:43.577 [2024-12-05 19:39:02.485044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74269 ] 00:20:43.836 [2024-12-05 19:39:02.647046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:43.836 [2024-12-05 19:39:02.745651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.836 [2024-12-05 19:39:02.745726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.403 19:39:03 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.403 19:39:03 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:20:44.403 19:39:03 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:20:44.403 19:39:03 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.403 19:39:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.403 [2024-12-05 19:39:03.336149] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:44.403 [2024-12-05 19:39:03.338001] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:44.403 19:39:03 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.403 19:39:03 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:20:44.403 19:39:03 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.403 19:39:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.662 malloc0 00:20:44.662 19:39:03 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.662 19:39:03 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:20:44.662 19:39:03 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.662 19:39:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:44.663 [2024-12-05 19:39:03.448278] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:20:44.663 [2024-12-05 19:39:03.448372] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:20:44.663 [2024-12-05 19:39:03.448384] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:44.663 [2024-12-05 19:39:03.448390] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:20:44.663 [2024-12-05 19:39:03.456280] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:20:44.663 [2024-12-05 19:39:03.456303] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:20:44.663 [2024-12-05 19:39:03.464159] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:20:44.663 [2024-12-05 19:39:03.464295] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:20:44.663 [2024-12-05 19:39:03.481162] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:20:44.663 1 00:20:44.663 19:39:03 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.663 19:39:03 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:20:45.597 19:39:04 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=74304 00:20:45.597 19:39:04 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:20:45.597 19:39:04 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:20:45.597 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:45.597 fio-3.35 00:20:45.597 Starting 1 process 00:20:50.860 19:39:09 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 74269 00:20:50.860 19:39:09 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:20:56.203 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 74269 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:20:56.203 19:39:14 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=74417 00:20:56.203 19:39:14 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:20:56.203 19:39:14 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:56.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:56.203 19:39:14 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 74417 00:20:56.204 19:39:14 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 74417 ']' 00:20:56.204 19:39:14 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.204 19:39:14 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:56.204 19:39:14 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.204 19:39:14 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:56.204 19:39:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.204 [2024-12-05 19:39:14.578147] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
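The block above is the crash half of the recovery test: the first target (pid 74269) serves /dev/ublkb1 from a malloc bdev, fio drives random I/O against it for 60 seconds, and the target is killed with SIGKILL while that I/O is in flight; a second target (pid 74417) is then started to take the device over. A condensed sketch of the flow, reconstructed from the rpc_cmd calls traced in this log (PIDs, queue counts, and sizes as logged; this is not the verbatim ublk_recovery.sh):

  modprobe ublk_drv
  spdk_tgt -m 0x3 -L ublk &                      # first target (pid 74269 here)
  rpc_cmd ublk_create_target
  rpc_cmd bdev_malloc_create -b malloc0 64 4096  # 64 MiB bdev, 4 KiB blocks
  rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128  # /dev/ublkb1: 2 queues, QD 128
  fio --filename=/dev/ublkb1 --rw=randrw --time_based --runtime=60 ... &
  kill -9 74269                                  # crash the target under load
  spdk_tgt -m 0x3 -L ublk &                      # second target (pid 74417 here)
  rpc_cmd ublk_create_target
  rpc_cmd bdev_malloc_create -b malloc0 64 4096  # recreate the backing bdev
  rpc_cmd ublk_recover_disk malloc0 1            # re-bind the surviving ublk 1

The kernel keeps ublk device 1 registered across the daemon crash (the flags 0xda logged below include the user-recovery feature bit), so fio blocks instead of erroring out; once UBLK_CMD_END_USER_RECOVERY completes, I/O resumes and the job runs to completion, which is what the fio summary further down confirms (err=0, util 99.92%).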
00:20:56.204 [2024-12-05 19:39:14.578814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74417 ] 00:20:56.204 [2024-12-05 19:39:14.737048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:56.204 [2024-12-05 19:39:14.841959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.204 [2024-12-05 19:39:14.842064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.460 19:39:15 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.460 19:39:15 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:20:56.460 19:39:15 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:20:56.460 19:39:15 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.461 19:39:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.461 [2024-12-05 19:39:15.437148] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:20:56.461 [2024-12-05 19:39:15.438982] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:20:56.461 19:39:15 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.461 19:39:15 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:20:56.461 19:39:15 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.461 19:39:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.718 malloc0 00:20:56.718 19:39:15 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.718 19:39:15 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:20:56.718 19:39:15 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.718 19:39:15 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:20:56.718 [2024-12-05 19:39:15.541275] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:20:56.718 [2024-12-05 19:39:15.541316] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:20:56.718 [2024-12-05 19:39:15.541326] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:20:56.718 [2024-12-05 19:39:15.549178] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:20:56.718 [2024-12-05 19:39:15.549202] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:20:56.718 [2024-12-05 19:39:15.549211] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:20:56.718 [2024-12-05 19:39:15.549286] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:20:56.718 1 00:20:56.718 19:39:15 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.718 19:39:15 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 74304 00:20:56.718 [2024-12-05 19:39:15.557149] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:20:56.718 [2024-12-05 19:39:15.560191] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:20:56.718 [2024-12-05 19:39:15.564155] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:20:56.718 [2024-12-05 
19:39:15.564175] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:21:52.999 00:21:52.999 fio_test: (groupid=0, jobs=1): err= 0: pid=74307: Thu Dec 5 19:40:04 2024 00:21:52.999 read: IOPS=28.6k, BW=112MiB/s (117MB/s)(6693MiB/60002msec) 00:21:52.999 slat (nsec): min=1036, max=678265, avg=4756.28, stdev=1550.40 00:21:52.999 clat (usec): min=510, max=6074.7k, avg=2174.82, stdev=35019.53 00:21:52.999 lat (usec): min=514, max=6074.8k, avg=2179.57, stdev=35019.54 00:21:52.999 clat percentiles (usec): 00:21:52.999 | 1.00th=[ 1647], 5.00th=[ 1762], 10.00th=[ 1778], 20.00th=[ 1811], 00:21:52.999 | 30.00th=[ 1827], 40.00th=[ 1844], 50.00th=[ 1844], 60.00th=[ 1860], 00:21:52.999 | 70.00th=[ 1876], 80.00th=[ 1893], 90.00th=[ 1942], 95.00th=[ 2868], 00:21:52.999 | 99.00th=[ 4817], 99.50th=[ 5473], 99.90th=[ 7046], 99.95th=[ 7963], 00:21:52.999 | 99.99th=[12780] 00:21:52.999 bw ( KiB/s): min=13328, max=131752, per=100.00%, avg=125856.25, stdev=16028.89, samples=108 00:21:52.999 iops : min= 3332, max=32938, avg=31464.06, stdev=4007.23, samples=108 00:21:52.999 write: IOPS=28.5k, BW=111MiB/s (117MB/s)(6688MiB/60002msec); 0 zone resets 00:21:52.999 slat (nsec): min=1058, max=191760, avg=4782.74, stdev=1370.81 00:21:52.999 clat (usec): min=507, max=6074.9k, avg=2298.96, stdev=39100.66 00:21:52.999 lat (usec): min=511, max=6074.9k, avg=2303.74, stdev=39100.66 00:21:52.999 clat percentiles (usec): 00:21:52.999 | 1.00th=[ 1696], 5.00th=[ 1844], 10.00th=[ 1876], 20.00th=[ 1893], 00:21:52.999 | 30.00th=[ 1909], 40.00th=[ 1926], 50.00th=[ 1942], 60.00th=[ 1958], 00:21:52.999 | 70.00th=[ 1958], 80.00th=[ 1991], 90.00th=[ 2024], 95.00th=[ 2802], 00:21:52.999 | 99.00th=[ 4817], 99.50th=[ 5604], 99.90th=[ 7111], 99.95th=[ 7963], 00:21:52.999 | 99.99th=[12911] 00:21:52.999 bw ( KiB/s): min=13096, max=131856, per=100.00%, avg=125753.58, stdev=16156.57, samples=108 00:21:52.999 iops : min= 3274, max=32964, avg=31438.37, stdev=4039.16, samples=108 00:21:52.999 lat (usec) : 750=0.01%, 1000=0.01% 00:21:52.999 lat (msec) : 2=88.89%, 4=8.60%, 10=2.47%, 20=0.02%, >=2000=0.01% 00:21:52.999 cpu : usr=6.05%, sys=27.65%, ctx=113881, majf=0, minf=14 00:21:52.999 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:21:52.999 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.999 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:52.999 issued rwts: total=1713432,1712009,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:52.999 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:52.999 00:21:52.999 Run status group 0 (all jobs): 00:21:52.999 READ: bw=112MiB/s (117MB/s), 112MiB/s-112MiB/s (117MB/s-117MB/s), io=6693MiB (7018MB), run=60002-60002msec 00:21:52.999 WRITE: bw=111MiB/s (117MB/s), 111MiB/s-111MiB/s (117MB/s-117MB/s), io=6688MiB (7012MB), run=60002-60002msec 00:21:52.999 00:21:52.999 Disk stats (read/write): 00:21:52.999 ublkb1: ios=1710731/1709415, merge=0/0, ticks=3639660/3714081, in_queue=7353741, util=99.92% 00:21:52.999 19:40:04 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:21:52.999 19:40:04 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.999 19:40:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.999 [2024-12-05 19:40:04.743622] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:21:52.999 [2024-12-05 19:40:04.789246] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:21:52.999 [2024-12-05 19:40:04.789366] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:21:52.999 [2024-12-05 19:40:04.797168] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:21:52.999 [2024-12-05 19:40:04.797241] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:21:52.999 [2024-12-05 19:40:04.797249] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:21:52.999 19:40:04 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.999 19:40:04 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:21:52.999 19:40:04 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.999 19:40:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.999 [2024-12-05 19:40:04.813211] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:52.999 [2024-12-05 19:40:04.816938] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:52.999 [2024-12-05 19:40:04.816966] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:21:52.999 19:40:04 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.999 19:40:04 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:21:52.999 19:40:04 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:21:52.999 19:40:04 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 74417 00:21:52.999 19:40:04 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 74417 ']' 00:21:52.999 19:40:04 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 74417 00:21:52.999 19:40:04 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:21:52.999 19:40:04 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:52.999 19:40:04 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74417 00:21:52.999 killing process with pid 74417 00:21:52.999 19:40:04 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:52.999 19:40:04 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:52.999 19:40:04 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74417' 00:21:52.999 19:40:04 ublk_recovery -- common/autotest_common.sh@973 -- # kill 74417 00:21:52.999 19:40:04 ublk_recovery -- common/autotest_common.sh@978 -- # wait 74417 00:21:52.999 [2024-12-05 19:40:05.868468] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:21:52.999 [2024-12-05 19:40:05.868508] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:21:52.999 ************************************ 00:21:52.999 END TEST ublk_recovery 00:21:52.999 ************************************ 00:21:52.999 00:21:52.999 real 1m4.283s 00:21:52.999 user 1m47.017s 00:21:52.999 sys 0m31.062s 00:21:52.999 19:40:06 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:52.999 19:40:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.999 19:40:06 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:21:53.000 19:40:06 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:21:53.000 19:40:06 -- spdk/autotest.sh@260 -- # timing_exit lib 00:21:53.000 19:40:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:53.000 19:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:53.000 19:40:06 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:21:53.000 19:40:06 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:21:53.000 19:40:06 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 
']' 00:21:53.000 19:40:06 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:53.000 19:40:06 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:53.000 19:40:06 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:53.000 19:40:06 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:21:53.000 19:40:06 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:53.000 19:40:06 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:53.000 19:40:06 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:21:53.000 19:40:06 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:53.000 19:40:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:53.000 19:40:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:53.000 19:40:06 -- common/autotest_common.sh@10 -- # set +x 00:21:53.000 ************************************ 00:21:53.000 START TEST ftl 00:21:53.000 ************************************ 00:21:53.000 19:40:06 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:53.000 * Looking for test storage... 00:21:53.000 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:53.000 19:40:06 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:53.000 19:40:06 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:21:53.000 19:40:06 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:53.000 19:40:06 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:53.000 19:40:06 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:53.000 19:40:06 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:53.000 19:40:06 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:53.000 19:40:06 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:21:53.000 19:40:06 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:21:53.000 19:40:06 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:21:53.000 19:40:06 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:21:53.000 19:40:06 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:21:53.000 19:40:06 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:21:53.000 19:40:06 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:21:53.000 19:40:06 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:53.000 19:40:06 ftl -- scripts/common.sh@344 -- # case "$op" in 00:21:53.000 19:40:06 ftl -- scripts/common.sh@345 -- # : 1 00:21:53.000 19:40:06 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:53.000 19:40:06 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:53.000 19:40:06 ftl -- scripts/common.sh@365 -- # decimal 1 00:21:53.000 19:40:06 ftl -- scripts/common.sh@353 -- # local d=1 00:21:53.000 19:40:06 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:53.000 19:40:06 ftl -- scripts/common.sh@355 -- # echo 1 00:21:53.000 19:40:06 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:21:53.000 19:40:06 ftl -- scripts/common.sh@366 -- # decimal 2 00:21:53.000 19:40:06 ftl -- scripts/common.sh@353 -- # local d=2 00:21:53.000 19:40:06 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:53.000 19:40:06 ftl -- scripts/common.sh@355 -- # echo 2 00:21:53.000 19:40:06 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:21:53.000 19:40:06 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:53.000 19:40:06 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:53.000 19:40:06 ftl -- scripts/common.sh@368 -- # return 0 00:21:53.000 19:40:06 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:53.000 19:40:06 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:53.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.000 --rc genhtml_branch_coverage=1 00:21:53.000 --rc genhtml_function_coverage=1 00:21:53.000 --rc genhtml_legend=1 00:21:53.000 --rc geninfo_all_blocks=1 00:21:53.000 --rc geninfo_unexecuted_blocks=1 00:21:53.000 00:21:53.000 ' 00:21:53.000 19:40:06 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:53.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.000 --rc genhtml_branch_coverage=1 00:21:53.000 --rc genhtml_function_coverage=1 00:21:53.000 --rc genhtml_legend=1 00:21:53.000 --rc geninfo_all_blocks=1 00:21:53.000 --rc geninfo_unexecuted_blocks=1 00:21:53.000 00:21:53.000 ' 00:21:53.000 19:40:06 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:53.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.000 --rc genhtml_branch_coverage=1 00:21:53.000 --rc genhtml_function_coverage=1 00:21:53.000 --rc genhtml_legend=1 00:21:53.000 --rc geninfo_all_blocks=1 00:21:53.000 --rc geninfo_unexecuted_blocks=1 00:21:53.000 00:21:53.000 ' 00:21:53.000 19:40:06 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:53.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.000 --rc genhtml_branch_coverage=1 00:21:53.000 --rc genhtml_function_coverage=1 00:21:53.000 --rc genhtml_legend=1 00:21:53.000 --rc geninfo_all_blocks=1 00:21:53.000 --rc geninfo_unexecuted_blocks=1 00:21:53.000 00:21:53.000 ' 00:21:53.000 19:40:06 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:53.000 19:40:06 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:21:53.000 19:40:06 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:53.000 19:40:06 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:53.000 19:40:06 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
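The version gate traced above (and again at the start of each test in this log) is scripts/common.sh's cmp_versions: the detected lcov version and the threshold are split into fields on '.', '-' and ':' and compared numerically field by field, stopping at the first inequality; here `lt 1.15 2` succeeds, so the pre-2.0 `--rc lcov_branch_coverage=...` option names are exported. A minimal standalone sketch of that comparison (simplified; the real cmp_versions also handles other operators and non-numeric fields via its decimal helper):

  ver_lt() {                            # does dotted version $1 sort before $2?
      local IFS='.-:' v
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for ((v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++)); do
          (( ${a[v]:-0} > ${b[v]:-0} )) && return 1
          (( ${a[v]:-0} < ${b[v]:-0} )) && return 0
      done
      return 1                          # equal is not less-than
  }
  ver_lt 1.15 2 && echo "lcov < 2: use legacy --rc option names"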
00:21:53.000 19:40:06 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:53.000 19:40:06 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:53.000 19:40:06 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:53.000 19:40:06 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:53.000 19:40:06 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:53.000 19:40:06 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:53.000 19:40:06 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:53.000 19:40:06 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:53.000 19:40:06 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:53.000 19:40:06 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:53.000 19:40:06 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:53.000 19:40:06 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:53.000 19:40:06 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:53.000 19:40:06 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:53.000 19:40:06 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:53.000 19:40:06 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:53.000 19:40:06 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:53.000 19:40:06 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:53.000 19:40:06 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:53.000 19:40:06 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:53.000 19:40:06 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:53.001 19:40:06 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:53.001 19:40:06 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:53.001 19:40:06 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:53.001 19:40:06 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:53.001 19:40:06 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:21:53.001 19:40:06 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:21:53.001 19:40:06 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:21:53.001 19:40:06 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:21:53.001 19:40:06 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:53.001 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:53.001 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:53.001 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:53.001 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:53.001 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:53.001 19:40:07 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=75218 00:21:53.001 19:40:07 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:21:53.001 19:40:07 ftl -- ftl/ftl.sh@38 -- # waitforlisten 75218 00:21:53.001 19:40:07 ftl -- common/autotest_common.sh@835 -- # '[' -z 75218 ']' 00:21:53.001 19:40:07 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.001 19:40:07 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:53.001 19:40:07 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.001 19:40:07 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.001 19:40:07 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:53.001 [2024-12-05 19:40:07.290468] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:21:53.001 [2024-12-05 19:40:07.290691] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75218 ] 00:21:53.001 [2024-12-05 19:40:07.446267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.001 [2024-12-05 19:40:07.543034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.001 19:40:08 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:53.001 19:40:08 ftl -- common/autotest_common.sh@868 -- # return 0 00:21:53.001 19:40:08 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:21:53.001 19:40:08 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:21:53.001 19:40:09 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:53.001 19:40:09 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:21:53.001 19:40:09 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:21:53.001 19:40:09 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:21:53.001 19:40:09 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:21:53.001 19:40:09 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:21:53.001 19:40:09 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:21:53.001 19:40:09 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:21:53.001 19:40:09 ftl -- ftl/ftl.sh@50 -- # break 00:21:53.001 19:40:09 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:21:53.001 19:40:09 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:21:53.001 19:40:09 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:21:53.001 19:40:09 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:21:53.001 19:40:09 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:21:53.001 19:40:09 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:21:53.001 19:40:09 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:21:53.001 19:40:09 ftl -- ftl/ftl.sh@63 -- # break 00:21:53.001 19:40:09 ftl -- ftl/ftl.sh@66 -- # killprocess 75218 00:21:53.001 19:40:09 ftl -- common/autotest_common.sh@954 -- # '[' -z 75218 ']' 00:21:53.001 19:40:09 ftl -- common/autotest_common.sh@958 -- # kill -0 75218 00:21:53.001 19:40:09 ftl -- common/autotest_common.sh@959 -- # uname 00:21:53.001 19:40:09 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:53.001 19:40:09 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75218 00:21:53.001 killing process with pid 75218 00:21:53.001 19:40:09 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:53.001 19:40:09 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:53.001 19:40:09 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75218' 00:21:53.001 19:40:09 ftl -- common/autotest_common.sh@973 -- # kill 75218 00:21:53.001 19:40:09 ftl -- common/autotest_common.sh@978 -- # wait 75218 00:21:53.001 19:40:11 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:21:53.001 19:40:11 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:21:53.001 19:40:11 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:53.001 19:40:11 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:53.001 19:40:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:53.001 ************************************ 00:21:53.001 START TEST ftl_fio_basic 00:21:53.001 ************************************ 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:21:53.001 * Looking for test storage... 00:21:53.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:21:53.001 19:40:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:53.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.002 --rc genhtml_branch_coverage=1 00:21:53.002 --rc genhtml_function_coverage=1 00:21:53.002 --rc genhtml_legend=1 00:21:53.002 --rc geninfo_all_blocks=1 00:21:53.002 --rc geninfo_unexecuted_blocks=1 00:21:53.002 00:21:53.002 ' 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:53.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.002 --rc genhtml_branch_coverage=1 00:21:53.002 --rc genhtml_function_coverage=1 00:21:53.002 --rc genhtml_legend=1 00:21:53.002 --rc geninfo_all_blocks=1 00:21:53.002 --rc geninfo_unexecuted_blocks=1 00:21:53.002 00:21:53.002 ' 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:53.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.002 --rc genhtml_branch_coverage=1 00:21:53.002 --rc genhtml_function_coverage=1 00:21:53.002 --rc genhtml_legend=1 00:21:53.002 --rc geninfo_all_blocks=1 00:21:53.002 --rc geninfo_unexecuted_blocks=1 00:21:53.002 00:21:53.002 ' 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:53.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:53.002 --rc genhtml_branch_coverage=1 00:21:53.002 --rc genhtml_function_coverage=1 00:21:53.002 --rc genhtml_legend=1 00:21:53.002 --rc geninfo_all_blocks=1 00:21:53.002 --rc geninfo_unexecuted_blocks=1 00:21:53.002 00:21:53.002 ' 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=75350 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 75350 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 75350 ']' 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:21:53.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.002 19:40:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:53.002 [2024-12-05 19:40:11.467049] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
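At this point fio.sh has mapped the suite name to its job list and exported the two variables that tie fio to the FTL bdev: 'basic' selects the three randw-verify jobs, FTL_BDEV_NAME names the bdev that bdev_ftl_create will register, and FTL_JSON_CONF is the SPDK configuration the fio jobs load. In outline (reconstructed from the variables traced above; the per-job loop and the job-file layout are assumptions about the rest of fio.sh, which is not fully visible in this log):

  declare -A suite
  suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
  device=$1 cache_device=$2        # 0000:00:11.0 and 0000:00:10.0 in this run
  tests=${suite[$3]}               # $3 = basic
  timeout=240
  export FTL_BDEV_NAME=ftl0
  export FTL_JSON_CONF=$testdir/config/ftl.json
  for test in $tests; do           # assumed loop: run each job against ftl0
      fio_bdev "$testdir/config/fio/$test.fio"
  done

The spdk_tgt started next (pid 75350, core mask 7) is the service those jobs will talk to; the ftl0 bdev itself is assembled further down from a thin lvol on the base device plus an nv-cache split carved from 0000:00:10.0.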
00:21:53.002 [2024-12-05 19:40:11.467339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75350 ] 00:21:53.002 [2024-12-05 19:40:11.624042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:53.002 [2024-12-05 19:40:11.704083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.002 [2024-12-05 19:40:11.704278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:53.002 [2024-12-05 19:40:11.704359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.574 19:40:12 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:53.574 19:40:12 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:21:53.574 19:40:12 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:53.574 19:40:12 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:21:53.574 19:40:12 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:53.574 19:40:12 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:21:53.574 19:40:12 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:21:53.574 19:40:12 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:53.574 19:40:12 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:53.574 19:40:12 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:21:53.574 19:40:12 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:53.574 19:40:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:53.574 19:40:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:53.574 19:40:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:53.574 19:40:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:53.574 19:40:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:53.836 19:40:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:53.836 { 00:21:53.836 "name": "nvme0n1", 00:21:53.836 "aliases": [ 00:21:53.836 "8dbf4d35-7f8d-4ca4-8867-c79b443cfbe7" 00:21:53.836 ], 00:21:53.836 "product_name": "NVMe disk", 00:21:53.836 "block_size": 4096, 00:21:53.836 "num_blocks": 1310720, 00:21:53.836 "uuid": "8dbf4d35-7f8d-4ca4-8867-c79b443cfbe7", 00:21:53.836 "numa_id": -1, 00:21:53.836 "assigned_rate_limits": { 00:21:53.836 "rw_ios_per_sec": 0, 00:21:53.836 "rw_mbytes_per_sec": 0, 00:21:53.836 "r_mbytes_per_sec": 0, 00:21:53.836 "w_mbytes_per_sec": 0 00:21:53.836 }, 00:21:53.836 "claimed": false, 00:21:53.836 "zoned": false, 00:21:53.836 "supported_io_types": { 00:21:53.836 "read": true, 00:21:53.836 "write": true, 00:21:53.836 "unmap": true, 00:21:53.836 "flush": true, 00:21:53.836 "reset": true, 00:21:53.836 "nvme_admin": true, 00:21:53.836 "nvme_io": true, 00:21:53.836 "nvme_io_md": false, 00:21:53.837 "write_zeroes": true, 00:21:53.837 "zcopy": false, 00:21:53.837 "get_zone_info": false, 00:21:53.837 "zone_management": false, 00:21:53.837 "zone_append": false, 00:21:53.837 "compare": true, 00:21:53.837 "compare_and_write": false, 00:21:53.837 "abort": true, 00:21:53.837 
"seek_hole": false, 00:21:53.837 "seek_data": false, 00:21:53.837 "copy": true, 00:21:53.837 "nvme_iov_md": false 00:21:53.837 }, 00:21:53.837 "driver_specific": { 00:21:53.837 "nvme": [ 00:21:53.837 { 00:21:53.837 "pci_address": "0000:00:11.0", 00:21:53.837 "trid": { 00:21:53.837 "trtype": "PCIe", 00:21:53.837 "traddr": "0000:00:11.0" 00:21:53.837 }, 00:21:53.837 "ctrlr_data": { 00:21:53.837 "cntlid": 0, 00:21:53.837 "vendor_id": "0x1b36", 00:21:53.837 "model_number": "QEMU NVMe Ctrl", 00:21:53.837 "serial_number": "12341", 00:21:53.837 "firmware_revision": "8.0.0", 00:21:53.837 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:53.837 "oacs": { 00:21:53.837 "security": 0, 00:21:53.837 "format": 1, 00:21:53.837 "firmware": 0, 00:21:53.837 "ns_manage": 1 00:21:53.837 }, 00:21:53.837 "multi_ctrlr": false, 00:21:53.837 "ana_reporting": false 00:21:53.837 }, 00:21:53.837 "vs": { 00:21:53.837 "nvme_version": "1.4" 00:21:53.837 }, 00:21:53.837 "ns_data": { 00:21:53.837 "id": 1, 00:21:53.837 "can_share": false 00:21:53.837 } 00:21:53.837 } 00:21:53.837 ], 00:21:53.837 "mp_policy": "active_passive" 00:21:53.837 } 00:21:53.837 } 00:21:53.837 ]' 00:21:53.837 19:40:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:53.837 19:40:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:53.837 19:40:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:53.837 19:40:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:53.837 19:40:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:53.837 19:40:12 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:21:53.837 19:40:12 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:21:53.837 19:40:12 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:53.837 19:40:12 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:21:53.837 19:40:12 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:53.837 19:40:12 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:54.098 19:40:12 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:21:54.098 19:40:12 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:54.359 19:40:13 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=1b769824-a594-49b2-97d1-d97750493f89 00:21:54.359 19:40:13 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 1b769824-a594-49b2-97d1-d97750493f89 00:21:54.639 19:40:13 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=a1f706c4-24bd-4388-9fe3-b4934a6aaeeb 00:21:54.639 19:40:13 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a1f706c4-24bd-4388-9fe3-b4934a6aaeeb 00:21:54.639 19:40:13 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:21:54.639 19:40:13 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:54.639 19:40:13 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=a1f706c4-24bd-4388-9fe3-b4934a6aaeeb 00:21:54.639 19:40:13 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:21:54.639 19:40:13 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size a1f706c4-24bd-4388-9fe3-b4934a6aaeeb 00:21:54.639 19:40:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=a1f706c4-24bd-4388-9fe3-b4934a6aaeeb 
00:21:54.639 19:40:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:54.639 19:40:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:54.639 19:40:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:54.639 19:40:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a1f706c4-24bd-4388-9fe3-b4934a6aaeeb 00:21:54.639 19:40:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:54.639 { 00:21:54.639 "name": "a1f706c4-24bd-4388-9fe3-b4934a6aaeeb", 00:21:54.639 "aliases": [ 00:21:54.639 "lvs/nvme0n1p0" 00:21:54.639 ], 00:21:54.639 "product_name": "Logical Volume", 00:21:54.639 "block_size": 4096, 00:21:54.639 "num_blocks": 26476544, 00:21:54.639 "uuid": "a1f706c4-24bd-4388-9fe3-b4934a6aaeeb", 00:21:54.639 "assigned_rate_limits": { 00:21:54.639 "rw_ios_per_sec": 0, 00:21:54.639 "rw_mbytes_per_sec": 0, 00:21:54.639 "r_mbytes_per_sec": 0, 00:21:54.639 "w_mbytes_per_sec": 0 00:21:54.639 }, 00:21:54.639 "claimed": false, 00:21:54.639 "zoned": false, 00:21:54.639 "supported_io_types": { 00:21:54.639 "read": true, 00:21:54.639 "write": true, 00:21:54.639 "unmap": true, 00:21:54.639 "flush": false, 00:21:54.639 "reset": true, 00:21:54.639 "nvme_admin": false, 00:21:54.639 "nvme_io": false, 00:21:54.639 "nvme_io_md": false, 00:21:54.639 "write_zeroes": true, 00:21:54.639 "zcopy": false, 00:21:54.639 "get_zone_info": false, 00:21:54.639 "zone_management": false, 00:21:54.639 "zone_append": false, 00:21:54.639 "compare": false, 00:21:54.639 "compare_and_write": false, 00:21:54.639 "abort": false, 00:21:54.639 "seek_hole": true, 00:21:54.639 "seek_data": true, 00:21:54.639 "copy": false, 00:21:54.639 "nvme_iov_md": false 00:21:54.639 }, 00:21:54.639 "driver_specific": { 00:21:54.639 "lvol": { 00:21:54.639 "lvol_store_uuid": "1b769824-a594-49b2-97d1-d97750493f89", 00:21:54.639 "base_bdev": "nvme0n1", 00:21:54.639 "thin_provision": true, 00:21:54.639 "num_allocated_clusters": 0, 00:21:54.639 "snapshot": false, 00:21:54.639 "clone": false, 00:21:54.639 "esnap_clone": false 00:21:54.639 } 00:21:54.639 } 00:21:54.639 } 00:21:54.639 ]' 00:21:54.639 19:40:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:54.639 19:40:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:54.639 19:40:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:54.902 19:40:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:54.902 19:40:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:54.902 19:40:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:21:54.902 19:40:13 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:21:54.902 19:40:13 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:21:54.902 19:40:13 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:55.164 19:40:13 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:55.164 19:40:13 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:55.164 19:40:13 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size a1f706c4-24bd-4388-9fe3-b4934a6aaeeb 00:21:55.164 19:40:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=a1f706c4-24bd-4388-9fe3-b4934a6aaeeb 00:21:55.164 19:40:13 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:55.164 19:40:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:55.164 19:40:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:55.164 19:40:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a1f706c4-24bd-4388-9fe3-b4934a6aaeeb 00:21:55.164 19:40:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:55.164 { 00:21:55.164 "name": "a1f706c4-24bd-4388-9fe3-b4934a6aaeeb", 00:21:55.164 "aliases": [ 00:21:55.164 "lvs/nvme0n1p0" 00:21:55.164 ], 00:21:55.164 "product_name": "Logical Volume", 00:21:55.164 "block_size": 4096, 00:21:55.164 "num_blocks": 26476544, 00:21:55.164 "uuid": "a1f706c4-24bd-4388-9fe3-b4934a6aaeeb", 00:21:55.164 "assigned_rate_limits": { 00:21:55.164 "rw_ios_per_sec": 0, 00:21:55.164 "rw_mbytes_per_sec": 0, 00:21:55.164 "r_mbytes_per_sec": 0, 00:21:55.164 "w_mbytes_per_sec": 0 00:21:55.164 }, 00:21:55.164 "claimed": false, 00:21:55.164 "zoned": false, 00:21:55.164 "supported_io_types": { 00:21:55.164 "read": true, 00:21:55.164 "write": true, 00:21:55.164 "unmap": true, 00:21:55.164 "flush": false, 00:21:55.164 "reset": true, 00:21:55.164 "nvme_admin": false, 00:21:55.164 "nvme_io": false, 00:21:55.164 "nvme_io_md": false, 00:21:55.164 "write_zeroes": true, 00:21:55.164 "zcopy": false, 00:21:55.164 "get_zone_info": false, 00:21:55.164 "zone_management": false, 00:21:55.164 "zone_append": false, 00:21:55.164 "compare": false, 00:21:55.164 "compare_and_write": false, 00:21:55.164 "abort": false, 00:21:55.164 "seek_hole": true, 00:21:55.164 "seek_data": true, 00:21:55.164 "copy": false, 00:21:55.164 "nvme_iov_md": false 00:21:55.164 }, 00:21:55.164 "driver_specific": { 00:21:55.164 "lvol": { 00:21:55.164 "lvol_store_uuid": "1b769824-a594-49b2-97d1-d97750493f89", 00:21:55.164 "base_bdev": "nvme0n1", 00:21:55.164 "thin_provision": true, 00:21:55.164 "num_allocated_clusters": 0, 00:21:55.164 "snapshot": false, 00:21:55.164 "clone": false, 00:21:55.164 "esnap_clone": false 00:21:55.164 } 00:21:55.164 } 00:21:55.164 } 00:21:55.164 ]' 00:21:55.164 19:40:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:55.164 19:40:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:55.164 19:40:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:55.426 19:40:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:55.426 19:40:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:55.426 19:40:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:21:55.426 19:40:14 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:21:55.426 19:40:14 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:55.426 19:40:14 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:21:55.426 19:40:14 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:21:55.426 19:40:14 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:21:55.426 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:21:55.426 19:40:14 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size a1f706c4-24bd-4388-9fe3-b4934a6aaeeb 00:21:55.426 19:40:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=a1f706c4-24bd-4388-9fe3-b4934a6aaeeb 00:21:55.426 19:40:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:55.426 19:40:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:55.426 19:40:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:55.426 19:40:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a1f706c4-24bd-4388-9fe3-b4934a6aaeeb 00:21:55.688 19:40:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:55.688 { 00:21:55.688 "name": "a1f706c4-24bd-4388-9fe3-b4934a6aaeeb", 00:21:55.688 "aliases": [ 00:21:55.688 "lvs/nvme0n1p0" 00:21:55.688 ], 00:21:55.688 "product_name": "Logical Volume", 00:21:55.688 "block_size": 4096, 00:21:55.688 "num_blocks": 26476544, 00:21:55.688 "uuid": "a1f706c4-24bd-4388-9fe3-b4934a6aaeeb", 00:21:55.688 "assigned_rate_limits": { 00:21:55.688 "rw_ios_per_sec": 0, 00:21:55.688 "rw_mbytes_per_sec": 0, 00:21:55.688 "r_mbytes_per_sec": 0, 00:21:55.688 "w_mbytes_per_sec": 0 00:21:55.688 }, 00:21:55.688 "claimed": false, 00:21:55.688 "zoned": false, 00:21:55.688 "supported_io_types": { 00:21:55.688 "read": true, 00:21:55.688 "write": true, 00:21:55.688 "unmap": true, 00:21:55.688 "flush": false, 00:21:55.688 "reset": true, 00:21:55.688 "nvme_admin": false, 00:21:55.688 "nvme_io": false, 00:21:55.688 "nvme_io_md": false, 00:21:55.688 "write_zeroes": true, 00:21:55.688 "zcopy": false, 00:21:55.688 "get_zone_info": false, 00:21:55.688 "zone_management": false, 00:21:55.688 "zone_append": false, 00:21:55.688 "compare": false, 00:21:55.688 "compare_and_write": false, 00:21:55.688 "abort": false, 00:21:55.688 "seek_hole": true, 00:21:55.688 "seek_data": true, 00:21:55.688 "copy": false, 00:21:55.688 "nvme_iov_md": false 00:21:55.688 }, 00:21:55.688 "driver_specific": { 00:21:55.688 "lvol": { 00:21:55.688 "lvol_store_uuid": "1b769824-a594-49b2-97d1-d97750493f89", 00:21:55.688 "base_bdev": "nvme0n1", 00:21:55.688 "thin_provision": true, 00:21:55.688 "num_allocated_clusters": 0, 00:21:55.688 "snapshot": false, 00:21:55.688 "clone": false, 00:21:55.688 "esnap_clone": false 00:21:55.688 } 00:21:55.688 } 00:21:55.688 } 00:21:55.688 ]' 00:21:55.688 19:40:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:55.688 19:40:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:55.688 19:40:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:55.688 19:40:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:55.688 19:40:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:55.688 19:40:14 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:21:55.688 19:40:14 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:21:55.688 19:40:14 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:21:55.688 19:40:14 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a1f706c4-24bd-4388-9fe3-b4934a6aaeeb -c nvc0n1p0 --l2p_dram_limit 60 00:21:55.950 [2024-12-05 19:40:14.839205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.950 [2024-12-05 19:40:14.839318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:55.950 [2024-12-05 19:40:14.839337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:55.950 
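Note: the repeated get_bdev_size calls above reduce to two jq reads over bdev_get_bdevs plus one arithmetic step. A minimal standalone sketch, assuming rpc.py is on PATH (the real helper lives in test/common/autotest_common.sh and reads both fields from a single cached bdev_info, as the trace shows):

    bs=$(rpc.py bdev_get_bdevs -b a1f706c4-24bd-4388-9fe3-b4934a6aaeeb | jq '.[] .block_size')   # 4096
    nb=$(rpc.py bdev_get_bdevs -b a1f706c4-24bd-4388-9fe3-b4934a6aaeeb | jq '.[] .num_blocks')   # 26476544
    echo $(( nb * bs / 1024 / 1024 ))   # bytes -> MiB: 26476544 blocks * 4096 B = 103424 MiB

The "fio.sh: line 52: [: -eq: unary operator expected" message above is bash's complaint when the left operand of -eq expands to nothing, leaving [ -eq 1 ]: the test returns nonzero, is treated as false, and the run continues, which is why ftl/fio.sh@56 executes next. A guarded form such as [ "${flag:-0}" -eq 1 ] (flag is a hypothetical stand-in for the variable that is empty here) would avoid the message.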
[2024-12-05 19:40:14.839344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.950 [2024-12-05 19:40:14.839396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.950 [2024-12-05 19:40:14.839405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:55.950 [2024-12-05 19:40:14.839415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:55.950 [2024-12-05 19:40:14.839421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.951 [2024-12-05 19:40:14.839451] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:55.951 [2024-12-05 19:40:14.840087] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:55.951 [2024-12-05 19:40:14.840101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.951 [2024-12-05 19:40:14.840107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:55.951 [2024-12-05 19:40:14.840115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.663 ms 00:21:55.951 [2024-12-05 19:40:14.840123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.951 [2024-12-05 19:40:14.840194] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 213c74dd-8313-4d53-aa0d-48a9cc5e2d6d 00:21:55.951 [2024-12-05 19:40:14.841199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.951 [2024-12-05 19:40:14.841217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:55.951 [2024-12-05 19:40:14.841225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:55.951 [2024-12-05 19:40:14.841232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.951 [2024-12-05 19:40:14.845914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.951 [2024-12-05 19:40:14.845944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:55.951 [2024-12-05 19:40:14.845952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.638 ms 00:21:55.951 [2024-12-05 19:40:14.845959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.951 [2024-12-05 19:40:14.846040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.951 [2024-12-05 19:40:14.846049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:55.951 [2024-12-05 19:40:14.846055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:21:55.951 [2024-12-05 19:40:14.846065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.951 [2024-12-05 19:40:14.846101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.951 [2024-12-05 19:40:14.846110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:55.951 [2024-12-05 19:40:14.846116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:55.951 [2024-12-05 19:40:14.846123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.951 [2024-12-05 19:40:14.846151] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:55.951 [2024-12-05 19:40:14.848957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.951 [2024-12-05 
19:40:14.848983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:55.951 [2024-12-05 19:40:14.848993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.809 ms 00:21:55.951 [2024-12-05 19:40:14.849001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.951 [2024-12-05 19:40:14.849035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.951 [2024-12-05 19:40:14.849041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:55.951 [2024-12-05 19:40:14.849049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:55.951 [2024-12-05 19:40:14.849055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.951 [2024-12-05 19:40:14.849080] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:55.951 [2024-12-05 19:40:14.849205] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:55.951 [2024-12-05 19:40:14.849218] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:55.951 [2024-12-05 19:40:14.849226] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:55.951 [2024-12-05 19:40:14.849236] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:55.951 [2024-12-05 19:40:14.849243] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:55.951 [2024-12-05 19:40:14.849251] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:55.951 [2024-12-05 19:40:14.849257] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:55.951 [2024-12-05 19:40:14.849264] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:55.951 [2024-12-05 19:40:14.849269] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:55.951 [2024-12-05 19:40:14.849277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.951 [2024-12-05 19:40:14.849284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:55.951 [2024-12-05 19:40:14.849291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:21:55.951 [2024-12-05 19:40:14.849297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.951 [2024-12-05 19:40:14.849370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.951 [2024-12-05 19:40:14.849376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:55.951 [2024-12-05 19:40:14.849384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:55.951 [2024-12-05 19:40:14.849389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.951 [2024-12-05 19:40:14.849478] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:55.951 [2024-12-05 19:40:14.849489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:55.951 [2024-12-05 19:40:14.849499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:55.951 [2024-12-05 19:40:14.849505] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.951 [2024-12-05 19:40:14.849513] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:21:55.951 [2024-12-05 19:40:14.849518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:55.951 [2024-12-05 19:40:14.849524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:55.951 [2024-12-05 19:40:14.849529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:55.951 [2024-12-05 19:40:14.849537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:55.951 [2024-12-05 19:40:14.849542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:55.951 [2024-12-05 19:40:14.849549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:55.951 [2024-12-05 19:40:14.849554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:55.951 [2024-12-05 19:40:14.849561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:55.951 [2024-12-05 19:40:14.849566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:55.951 [2024-12-05 19:40:14.849573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:55.951 [2024-12-05 19:40:14.849578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.951 [2024-12-05 19:40:14.849588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:55.951 [2024-12-05 19:40:14.849594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:55.951 [2024-12-05 19:40:14.849600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.951 [2024-12-05 19:40:14.849605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:55.951 [2024-12-05 19:40:14.849611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:55.951 [2024-12-05 19:40:14.849617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:55.951 [2024-12-05 19:40:14.849624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:55.951 [2024-12-05 19:40:14.849629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:55.951 [2024-12-05 19:40:14.849635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:55.951 [2024-12-05 19:40:14.849640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:55.951 [2024-12-05 19:40:14.849646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:55.951 [2024-12-05 19:40:14.849651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:55.951 [2024-12-05 19:40:14.849657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:55.951 [2024-12-05 19:40:14.849662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:55.951 [2024-12-05 19:40:14.849668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:55.951 [2024-12-05 19:40:14.849673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:55.951 [2024-12-05 19:40:14.849681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:55.951 [2024-12-05 19:40:14.849697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:55.951 [2024-12-05 19:40:14.849703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:55.951 [2024-12-05 19:40:14.849708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:55.951 [2024-12-05 19:40:14.849715] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:55.951 [2024-12-05 19:40:14.849720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:55.951 [2024-12-05 19:40:14.849726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:55.951 [2024-12-05 19:40:14.849731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.951 [2024-12-05 19:40:14.849737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:55.951 [2024-12-05 19:40:14.849742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:55.951 [2024-12-05 19:40:14.849748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.951 [2024-12-05 19:40:14.849753] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:55.951 [2024-12-05 19:40:14.849760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:55.951 [2024-12-05 19:40:14.849766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:55.951 [2024-12-05 19:40:14.849773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.951 [2024-12-05 19:40:14.849779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:55.951 [2024-12-05 19:40:14.849788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:55.951 [2024-12-05 19:40:14.849793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:55.951 [2024-12-05 19:40:14.849799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:55.951 [2024-12-05 19:40:14.849804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:55.951 [2024-12-05 19:40:14.849811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:55.951 [2024-12-05 19:40:14.849817] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:55.952 [2024-12-05 19:40:14.849826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:55.952 [2024-12-05 19:40:14.849832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:55.952 [2024-12-05 19:40:14.849840] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:55.952 [2024-12-05 19:40:14.849845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:55.952 [2024-12-05 19:40:14.849852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:55.952 [2024-12-05 19:40:14.849858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:55.952 [2024-12-05 19:40:14.849865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:55.952 [2024-12-05 19:40:14.849871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:55.952 [2024-12-05 19:40:14.849878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:21:55.952 [2024-12-05 19:40:14.849884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:55.952 [2024-12-05 19:40:14.849892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:55.952 [2024-12-05 19:40:14.849897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:55.952 [2024-12-05 19:40:14.849904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:55.952 [2024-12-05 19:40:14.849910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:55.952 [2024-12-05 19:40:14.849917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:55.952 [2024-12-05 19:40:14.849923] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:55.952 [2024-12-05 19:40:14.849930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:55.952 [2024-12-05 19:40:14.849938] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:55.952 [2024-12-05 19:40:14.849945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:55.952 [2024-12-05 19:40:14.849950] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:55.952 [2024-12-05 19:40:14.849957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:55.952 [2024-12-05 19:40:14.849963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.952 [2024-12-05 19:40:14.849970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:55.952 [2024-12-05 19:40:14.849975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:21:55.952 [2024-12-05 19:40:14.849982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.952 [2024-12-05 19:40:14.850062] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
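Note: the blk_offs/blk_sz values in the superblock dump above are counted in 4 KiB FTL blocks, and the L2P figures follow from the --l2p_dram_limit 60 passed to bdev_ftl_create. Worked numbers from this run, as a sketch:

    echo $(( 0x5000 * 4096 / 1048576 ))      # l2p region blk_sz 0x5000 -> 80 MiB, matching "blocks: 80.00 MiB" above
    echo $(( 20971520 * 4 / 1048576 ))       # 20971520 L2P entries * 4 B address size -> 80 MiB if fully resident
    echo $(( 20971520 * 4096 / 1073741824 )) # user-visible capacity: 80 GiB out of the 103424 MiB base bdev

Since a fully resident L2P would need 80 MiB against the 60 MiB DRAM limit, only part of the table can stay in memory at once; the "l2p maximum resident size is: 59 (of 60) MiB" notice further down reflects exactly that. The base capacity not exposed to the user remains with FTL itself (metadata regions and band over-provisioning).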
00:21:55.952 [2024-12-05 19:40:14.850076] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:59.270 [2024-12-05 19:40:17.756183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.270 [2024-12-05 19:40:17.756246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:59.270 [2024-12-05 19:40:17.756261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2906.108 ms 00:21:59.270 [2024-12-05 19:40:17.756271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.270 [2024-12-05 19:40:17.781270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.270 [2024-12-05 19:40:17.781448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:59.270 [2024-12-05 19:40:17.781466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.793 ms 00:21:59.270 [2024-12-05 19:40:17.781476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.270 [2024-12-05 19:40:17.781600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.270 [2024-12-05 19:40:17.781612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:59.270 [2024-12-05 19:40:17.781621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:21:59.270 [2024-12-05 19:40:17.781632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.270 [2024-12-05 19:40:17.823444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.270 [2024-12-05 19:40:17.823487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:59.270 [2024-12-05 19:40:17.823502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.770 ms 00:21:59.270 [2024-12-05 19:40:17.823513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.270 [2024-12-05 19:40:17.823556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.270 [2024-12-05 19:40:17.823566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:59.270 [2024-12-05 19:40:17.823575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:59.270 [2024-12-05 19:40:17.823583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.270 [2024-12-05 19:40:17.823923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.270 [2024-12-05 19:40:17.823957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:59.270 [2024-12-05 19:40:17.823966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:21:59.270 [2024-12-05 19:40:17.823977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.270 [2024-12-05 19:40:17.824100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.270 [2024-12-05 19:40:17.824111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:59.270 [2024-12-05 19:40:17.824118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:21:59.270 [2024-12-05 19:40:17.824145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.270 [2024-12-05 19:40:17.838249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.270 [2024-12-05 19:40:17.838280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:59.270 [2024-12-05 
19:40:17.838289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.082 ms 00:21:59.270 [2024-12-05 19:40:17.838298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.270 [2024-12-05 19:40:17.849624] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:59.270 [2024-12-05 19:40:17.863568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.270 [2024-12-05 19:40:17.863600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:59.270 [2024-12-05 19:40:17.863615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.176 ms 00:21:59.270 [2024-12-05 19:40:17.863623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.270 [2024-12-05 19:40:17.912095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.270 [2024-12-05 19:40:17.912148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:59.270 [2024-12-05 19:40:17.912163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.436 ms 00:21:59.270 [2024-12-05 19:40:17.912171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.270 [2024-12-05 19:40:17.912357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.270 [2024-12-05 19:40:17.912368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:59.270 [2024-12-05 19:40:17.912400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:21:59.270 [2024-12-05 19:40:17.912408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.270 [2024-12-05 19:40:17.935196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.270 [2024-12-05 19:40:17.935327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:59.270 [2024-12-05 19:40:17.935347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.734 ms 00:21:59.270 [2024-12-05 19:40:17.935356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.270 [2024-12-05 19:40:17.957524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.270 [2024-12-05 19:40:17.957555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:59.270 [2024-12-05 19:40:17.957567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.130 ms 00:21:59.270 [2024-12-05 19:40:17.957575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.270 [2024-12-05 19:40:17.958152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.270 [2024-12-05 19:40:17.958200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:59.270 [2024-12-05 19:40:17.958213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:21:59.270 [2024-12-05 19:40:17.958221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.270 [2024-12-05 19:40:18.021706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.270 [2024-12-05 19:40:18.021858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:59.270 [2024-12-05 19:40:18.021882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.447 ms 00:21:59.270 [2024-12-05 19:40:18.021890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.270 [2024-12-05 
19:40:18.045580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.270 [2024-12-05 19:40:18.045615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:59.270 [2024-12-05 19:40:18.045629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.604 ms 00:21:59.270 [2024-12-05 19:40:18.045637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.270 [2024-12-05 19:40:18.067973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.270 [2024-12-05 19:40:18.068006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:59.270 [2024-12-05 19:40:18.068019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.297 ms 00:21:59.270 [2024-12-05 19:40:18.068026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.270 [2024-12-05 19:40:18.090741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.270 [2024-12-05 19:40:18.090773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:59.270 [2024-12-05 19:40:18.090786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.676 ms 00:21:59.270 [2024-12-05 19:40:18.090794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.270 [2024-12-05 19:40:18.090836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.270 [2024-12-05 19:40:18.090845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:59.270 [2024-12-05 19:40:18.090859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:59.270 [2024-12-05 19:40:18.090867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.270 [2024-12-05 19:40:18.090947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.270 [2024-12-05 19:40:18.090957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:59.270 [2024-12-05 19:40:18.090967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:59.270 [2024-12-05 19:40:18.090974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.271 [2024-12-05 19:40:18.091831] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3252.205 ms, result 0 00:21:59.271 { 00:21:59.271 "name": "ftl0", 00:21:59.271 "uuid": "213c74dd-8313-4d53-aa0d-48a9cc5e2d6d" 00:21:59.271 } 00:21:59.271 19:40:18 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:21:59.271 19:40:18 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:21:59.271 19:40:18 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:59.271 19:40:18 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:21:59.271 19:40:18 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:59.271 19:40:18 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:59.271 19:40:18 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:59.530 19:40:18 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:21:59.530 [ 00:21:59.530 { 00:21:59.530 "name": "ftl0", 00:21:59.530 "aliases": [ 00:21:59.530 "213c74dd-8313-4d53-aa0d-48a9cc5e2d6d" 00:21:59.530 ], 00:21:59.530 "product_name": "FTL 
disk", 00:21:59.530 "block_size": 4096, 00:21:59.530 "num_blocks": 20971520, 00:21:59.530 "uuid": "213c74dd-8313-4d53-aa0d-48a9cc5e2d6d", 00:21:59.530 "assigned_rate_limits": { 00:21:59.530 "rw_ios_per_sec": 0, 00:21:59.530 "rw_mbytes_per_sec": 0, 00:21:59.530 "r_mbytes_per_sec": 0, 00:21:59.530 "w_mbytes_per_sec": 0 00:21:59.530 }, 00:21:59.530 "claimed": false, 00:21:59.530 "zoned": false, 00:21:59.530 "supported_io_types": { 00:21:59.530 "read": true, 00:21:59.530 "write": true, 00:21:59.530 "unmap": true, 00:21:59.530 "flush": true, 00:21:59.530 "reset": false, 00:21:59.530 "nvme_admin": false, 00:21:59.530 "nvme_io": false, 00:21:59.530 "nvme_io_md": false, 00:21:59.530 "write_zeroes": true, 00:21:59.530 "zcopy": false, 00:21:59.530 "get_zone_info": false, 00:21:59.530 "zone_management": false, 00:21:59.530 "zone_append": false, 00:21:59.530 "compare": false, 00:21:59.530 "compare_and_write": false, 00:21:59.530 "abort": false, 00:21:59.530 "seek_hole": false, 00:21:59.530 "seek_data": false, 00:21:59.530 "copy": false, 00:21:59.530 "nvme_iov_md": false 00:21:59.530 }, 00:21:59.530 "driver_specific": { 00:21:59.530 "ftl": { 00:21:59.530 "base_bdev": "a1f706c4-24bd-4388-9fe3-b4934a6aaeeb", 00:21:59.530 "cache": "nvc0n1p0" 00:21:59.530 } 00:21:59.530 } 00:21:59.530 } 00:21:59.530 ] 00:21:59.530 19:40:18 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:21:59.530 19:40:18 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:21:59.530 19:40:18 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:59.789 19:40:18 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:21:59.789 19:40:18 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:00.051 [2024-12-05 19:40:18.896500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.051 [2024-12-05 19:40:18.896637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:00.051 [2024-12-05 19:40:18.896655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:00.051 [2024-12-05 19:40:18.896669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.051 [2024-12-05 19:40:18.896703] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:00.051 [2024-12-05 19:40:18.899336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.051 [2024-12-05 19:40:18.899366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:00.051 [2024-12-05 19:40:18.899378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.616 ms 00:22:00.051 [2024-12-05 19:40:18.899386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.051 [2024-12-05 19:40:18.899786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.051 [2024-12-05 19:40:18.899800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:00.051 [2024-12-05 19:40:18.899809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:22:00.051 [2024-12-05 19:40:18.899817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.051 [2024-12-05 19:40:18.903055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.051 [2024-12-05 19:40:18.903164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:00.051 
[2024-12-05 19:40:18.903180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.210 ms 00:22:00.051 [2024-12-05 19:40:18.903188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.051 [2024-12-05 19:40:18.909373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.051 [2024-12-05 19:40:18.909399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:00.051 [2024-12-05 19:40:18.909411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.161 ms 00:22:00.051 [2024-12-05 19:40:18.909419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.051 [2024-12-05 19:40:18.932929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.051 [2024-12-05 19:40:18.932962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:00.051 [2024-12-05 19:40:18.932986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.430 ms 00:22:00.051 [2024-12-05 19:40:18.932994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.051 [2024-12-05 19:40:18.947541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.051 [2024-12-05 19:40:18.947573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:00.051 [2024-12-05 19:40:18.947589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.505 ms 00:22:00.051 [2024-12-05 19:40:18.947597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.051 [2024-12-05 19:40:18.947771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.051 [2024-12-05 19:40:18.947782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:00.051 [2024-12-05 19:40:18.947792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:22:00.051 [2024-12-05 19:40:18.947799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.051 [2024-12-05 19:40:18.970317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.051 [2024-12-05 19:40:18.970429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:00.051 [2024-12-05 19:40:18.970447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.490 ms 00:22:00.051 [2024-12-05 19:40:18.970454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.051 [2024-12-05 19:40:18.992966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.051 [2024-12-05 19:40:18.993005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:00.051 [2024-12-05 19:40:18.993017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.474 ms 00:22:00.051 [2024-12-05 19:40:18.993024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.051 [2024-12-05 19:40:19.015023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.051 [2024-12-05 19:40:19.015054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:00.051 [2024-12-05 19:40:19.015065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.956 ms 00:22:00.051 [2024-12-05 19:40:19.015072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.051 [2024-12-05 19:40:19.037270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.051 [2024-12-05 19:40:19.037379] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:00.051 [2024-12-05 19:40:19.037397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.094 ms 00:22:00.051 [2024-12-05 19:40:19.037404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.051 [2024-12-05 19:40:19.037439] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:00.051 [2024-12-05 19:40:19.037451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:00.051 [2024-12-05 19:40:19.037462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:00.051 [2024-12-05 19:40:19.037470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:00.051 [2024-12-05 19:40:19.037479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:00.051 [2024-12-05 19:40:19.037486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:00.051 [2024-12-05 19:40:19.037496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:00.051 [2024-12-05 19:40:19.037503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:00.051 [2024-12-05 19:40:19.037514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:00.051 [2024-12-05 19:40:19.037522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:00.051 [2024-12-05 19:40:19.037531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 
[2024-12-05 19:40:19.037640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:22:00.052 [2024-12-05 19:40:19.037852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.037991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:00.052 [2024-12-05 19:40:19.038310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:00.053 [2024-12-05 19:40:19.038320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:00.053 [2024-12-05 19:40:19.038338] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:00.053 [2024-12-05 19:40:19.038348] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 213c74dd-8313-4d53-aa0d-48a9cc5e2d6d 00:22:00.053 [2024-12-05 19:40:19.038355] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:00.053 [2024-12-05 19:40:19.038365] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:00.053 [2024-12-05 19:40:19.038374] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:00.053 [2024-12-05 19:40:19.038383] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:00.053 [2024-12-05 19:40:19.038390] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:00.053 [2024-12-05 19:40:19.038399] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:00.053 [2024-12-05 19:40:19.038406] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:00.053 [2024-12-05 19:40:19.038416] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:00.053 [2024-12-05 19:40:19.038422] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:00.053 [2024-12-05 19:40:19.038431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.053 [2024-12-05 19:40:19.038438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:00.053 [2024-12-05 19:40:19.038448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.993 ms 00:22:00.053 [2024-12-05 19:40:19.038455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.053 [2024-12-05 19:40:19.050830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.053 [2024-12-05 19:40:19.050857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:00.053 [2024-12-05 19:40:19.050868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.337 ms 00:22:00.053 [2024-12-05 19:40:19.050876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.053 [2024-12-05 19:40:19.051259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:00.053 [2024-12-05 19:40:19.051277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:00.053 [2024-12-05 19:40:19.051287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:22:00.053 [2024-12-05 19:40:19.051294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.314 [2024-12-05 19:40:19.094571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.314 [2024-12-05 19:40:19.094614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:00.314 [2024-12-05 19:40:19.094626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.314 [2024-12-05 19:40:19.094634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:00.314 [2024-12-05 19:40:19.094693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.314 [2024-12-05 19:40:19.094701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:00.314 [2024-12-05 19:40:19.094711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.314 [2024-12-05 19:40:19.094718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.314 [2024-12-05 19:40:19.094808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.314 [2024-12-05 19:40:19.094818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:00.314 [2024-12-05 19:40:19.094828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.314 [2024-12-05 19:40:19.094835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.314 [2024-12-05 19:40:19.094861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.314 [2024-12-05 19:40:19.094869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:00.314 [2024-12-05 19:40:19.094878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.314 [2024-12-05 19:40:19.094885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.314 [2024-12-05 19:40:19.176198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.314 [2024-12-05 19:40:19.176358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:00.314 [2024-12-05 19:40:19.176380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.314 [2024-12-05 19:40:19.176388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.314 [2024-12-05 19:40:19.238593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.314 [2024-12-05 19:40:19.238629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:00.314 [2024-12-05 19:40:19.238642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.314 [2024-12-05 19:40:19.238649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.314 [2024-12-05 19:40:19.238718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.314 [2024-12-05 19:40:19.238730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:00.314 [2024-12-05 19:40:19.238739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.314 [2024-12-05 19:40:19.238747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.314 [2024-12-05 19:40:19.238816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.314 [2024-12-05 19:40:19.238825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:00.314 [2024-12-05 19:40:19.238834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.314 [2024-12-05 19:40:19.238841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.314 [2024-12-05 19:40:19.238941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.314 [2024-12-05 19:40:19.238960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:00.314 [2024-12-05 19:40:19.238971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.314 [2024-12-05 
19:40:19.238978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.314 [2024-12-05 19:40:19.239028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.314 [2024-12-05 19:40:19.239037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:00.314 [2024-12-05 19:40:19.239046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.314 [2024-12-05 19:40:19.239053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.314 [2024-12-05 19:40:19.239097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.314 [2024-12-05 19:40:19.239105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:00.314 [2024-12-05 19:40:19.239114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.314 [2024-12-05 19:40:19.239123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.314 [2024-12-05 19:40:19.239206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.314 [2024-12-05 19:40:19.239217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:00.314 [2024-12-05 19:40:19.239226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.314 [2024-12-05 19:40:19.239234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.314 [2024-12-05 19:40:19.239385] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 342.860 ms, result 0 00:22:00.314 true 00:22:00.314 19:40:19 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 75350 00:22:00.314 19:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 75350 ']' 00:22:00.314 19:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 75350 00:22:00.314 19:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:22:00.314 19:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:00.314 19:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75350 00:22:00.314 killing process with pid 75350 00:22:00.314 19:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:00.314 19:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:00.314 19:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75350' 00:22:00.314 19:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 75350 00:22:00.314 19:40:19 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 75350 00:22:04.518 19:40:22 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:22:04.518 19:40:22 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:04.518 19:40:22 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:22:04.518 19:40:22 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:04.518 19:40:22 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:04.518 19:40:22 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:22:04.518 19:40:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:22:04.518 19:40:22 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:04.518 19:40:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:04.518 19:40:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:04.518 19:40:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:04.518 19:40:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:22:04.518 19:40:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:04.518 19:40:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:04.518 19:40:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:04.518 19:40:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:22:04.518 19:40:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:04.518 19:40:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:04.518 19:40:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:04.518 19:40:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:22:04.518 19:40:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:04.518 19:40:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:22:04.518 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:22:04.518 fio-3.35 00:22:04.518 Starting 1 thread 00:22:09.814 00:22:09.814 test: (groupid=0, jobs=1): err= 0: pid=75540: Thu Dec 5 19:40:27 2024 00:22:09.814 read: IOPS=1085, BW=72.1MiB/s (75.6MB/s)(255MiB/3531msec) 00:22:09.814 slat (nsec): min=3076, max=30132, avg=4493.01, stdev=2469.48 00:22:09.814 clat (usec): min=241, max=124704, avg=425.85, stdev=2012.97 00:22:09.814 lat (usec): min=244, max=124709, avg=430.35, stdev=2013.06 00:22:09.814 clat percentiles (usec): 00:22:09.814 | 1.00th=[ 273], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 310], 00:22:09.814 | 30.00th=[ 314], 40.00th=[ 318], 50.00th=[ 322], 60.00th=[ 330], 00:22:09.814 | 70.00th=[ 416], 80.00th=[ 502], 90.00th=[ 545], 95.00th=[ 644], 00:22:09.814 | 99.00th=[ 1020], 99.50th=[ 1106], 99.90th=[ 1172], 99.95th=[ 1237], 00:22:09.814 | 99.99th=[124257] 00:22:09.814 write: IOPS=1092, BW=72.6MiB/s (76.1MB/s)(256MiB/3528msec); 0 zone resets 00:22:09.814 slat (nsec): min=13469, max=67073, avg=18671.09, stdev=4611.98 00:22:09.814 clat (usec): min=262, max=2129, avg=455.92, stdev=201.41 00:22:09.814 lat (usec): min=296, max=2155, avg=474.59, stdev=203.98 00:22:09.814 clat percentiles (usec): 00:22:09.814 | 1.00th=[ 310], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 334], 00:22:09.814 | 30.00th=[ 343], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 400], 00:22:09.814 | 70.00th=[ 469], 80.00th=[ 603], 90.00th=[ 668], 95.00th=[ 824], 00:22:09.814 | 99.00th=[ 1172], 99.50th=[ 1631], 99.90th=[ 1991], 99.95th=[ 2089], 00:22:09.814 | 99.99th=[ 2114] 00:22:09.814 bw ( KiB/s): min=32640, max=96560, per=99.73%, avg=74120.00, stdev=24121.51, samples=7 00:22:09.814 iops : min= 480, max= 1420, avg=1090.00, stdev=354.73, samples=7 00:22:09.814 lat (usec) : 250=0.03%, 500=76.30%, 
750=18.96%, 1000=2.72% 00:22:09.814 lat (msec) : 2=1.94%, 4=0.04%, 250=0.01% 00:22:09.814 cpu : usr=99.07%, sys=0.20%, ctx=5, majf=0, minf=1169 00:22:09.815 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:09.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.815 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:09.815 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:09.815 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:09.815 00:22:09.815 Run status group 0 (all jobs): 00:22:09.815 READ: bw=72.1MiB/s (75.6MB/s), 72.1MiB/s-72.1MiB/s (75.6MB/s-75.6MB/s), io=255MiB (267MB), run=3531-3531msec 00:22:09.815 WRITE: bw=72.6MiB/s (76.1MB/s), 72.6MiB/s-72.6MiB/s (76.1MB/s-76.1MB/s), io=256MiB (269MB), run=3528-3528msec 00:22:10.756 ----------------------------------------------------- 00:22:10.756 Suppressions used: 00:22:10.756 count bytes template 00:22:10.756 1 5 /usr/src/fio/parse.c 00:22:10.756 1 8 libtcmalloc_minimal.so 00:22:10.756 1 904 libcrypto.so 00:22:10.756 ----------------------------------------------------- 00:22:10.756 00:22:10.756 19:40:29 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:22:10.756 19:40:29 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:10.756 19:40:29 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:10.756 19:40:29 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:10.756 19:40:29 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:22:10.756 19:40:29 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:10.756 19:40:29 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:10.756 19:40:29 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:10.756 19:40:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:10.756 19:40:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:10.756 19:40:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:10.756 19:40:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:10.756 19:40:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:10.756 19:40:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:22:10.756 19:40:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:10.756 19:40:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:10.756 19:40:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:10.756 19:40:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:22:10.756 19:40:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:10.756 19:40:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:10.757 19:40:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:10.757 19:40:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:22:10.757 19:40:29 
ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:10.757 19:40:29 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:22:10.757 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:10.757 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:10.757 fio-3.35 00:22:10.757 Starting 2 threads 00:22:37.336 00:22:37.336 first_half: (groupid=0, jobs=1): err= 0: pid=75637: Thu Dec 5 19:40:52 2024 00:22:37.336 read: IOPS=2998, BW=11.7MiB/s (12.3MB/s)(255MiB/21782msec) 00:22:37.336 slat (nsec): min=3107, max=24530, avg=3819.73, stdev=707.12 00:22:37.336 clat (usec): min=594, max=281570, avg=33911.90, stdev=15740.41 00:22:37.336 lat (usec): min=598, max=281574, avg=33915.72, stdev=15740.42 00:22:37.336 clat percentiles (msec): 00:22:37.336 | 1.00th=[ 16], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 30], 00:22:37.336 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:22:37.336 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 39], 95.00th=[ 45], 00:22:37.336 | 99.00th=[ 121], 99.50th=[ 142], 99.90th=[ 171], 99.95th=[ 213], 00:22:37.336 | 99.99th=[ 275] 00:22:37.336 write: IOPS=3634, BW=14.2MiB/s (14.9MB/s)(256MiB/18030msec); 0 zone resets 00:22:37.336 slat (usec): min=3, max=559, avg= 5.33, stdev= 4.03 00:22:37.336 clat (usec): min=352, max=87512, avg=8732.00, stdev=14560.83 00:22:37.336 lat (usec): min=361, max=87517, avg=8737.32, stdev=14560.89 00:22:37.336 clat percentiles (usec): 00:22:37.336 | 1.00th=[ 660], 5.00th=[ 750], 10.00th=[ 881], 20.00th=[ 1598], 00:22:37.336 | 30.00th=[ 2769], 40.00th=[ 3621], 50.00th=[ 4359], 60.00th=[ 5014], 00:22:37.336 | 70.00th=[ 5866], 80.00th=[10159], 90.00th=[16712], 95.00th=[37487], 00:22:37.336 | 99.00th=[78119], 99.50th=[81265], 99.90th=[84411], 99.95th=[85459], 00:22:37.336 | 99.99th=[86508] 00:22:37.336 bw ( KiB/s): min= 1416, max=42384, per=94.62%, avg=24966.10, stdev=12317.50, samples=21 00:22:37.336 iops : min= 354, max=10596, avg=6241.43, stdev=3079.48, samples=21 00:22:37.336 lat (usec) : 500=0.03%, 750=2.42%, 1000=3.82% 00:22:37.336 lat (msec) : 2=5.73%, 4=10.52%, 10=17.71%, 20=6.85%, 50=48.46% 00:22:37.336 lat (msec) : 100=3.62%, 250=0.84%, 500=0.02% 00:22:37.336 cpu : usr=99.28%, sys=0.11%, ctx=37, majf=0, minf=5593 00:22:37.336 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:37.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.336 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:37.336 issued rwts: total=65309,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:37.336 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:37.336 second_half: (groupid=0, jobs=1): err= 0: pid=75638: Thu Dec 5 19:40:52 2024 00:22:37.336 read: IOPS=2978, BW=11.6MiB/s (12.2MB/s)(255MiB/21927msec) 00:22:37.336 slat (nsec): min=3093, max=20009, avg=3865.83, stdev=691.55 00:22:37.336 clat (usec): min=603, max=286413, avg=33456.27, stdev=18051.40 00:22:37.336 lat (usec): min=608, max=286418, avg=33460.14, stdev=18051.43 00:22:37.336 clat percentiles (msec): 00:22:37.336 | 1.00th=[ 7], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 30], 00:22:37.336 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:22:37.336 | 70.00th=[ 32], 80.00th=[ 
35], 90.00th=[ 36], 95.00th=[ 45], 00:22:37.336 | 99.00th=[ 129], 99.50th=[ 150], 99.90th=[ 182], 99.95th=[ 209], 00:22:37.336 | 99.99th=[ 279] 00:22:37.336 write: IOPS=3298, BW=12.9MiB/s (13.5MB/s)(256MiB/19870msec); 0 zone resets 00:22:37.336 slat (usec): min=3, max=897, avg= 5.39, stdev= 5.47 00:22:37.336 clat (usec): min=371, max=87747, avg=9468.79, stdev=15570.69 00:22:37.336 lat (usec): min=383, max=87752, avg=9474.17, stdev=15570.76 00:22:37.336 clat percentiles (usec): 00:22:37.336 | 1.00th=[ 660], 5.00th=[ 758], 10.00th=[ 832], 20.00th=[ 1057], 00:22:37.336 | 30.00th=[ 1500], 40.00th=[ 2802], 50.00th=[ 3654], 60.00th=[ 4817], 00:22:37.336 | 70.00th=[ 6521], 80.00th=[12518], 90.00th=[27132], 95.00th=[39584], 00:22:37.336 | 99.00th=[78119], 99.50th=[82314], 99.90th=[85459], 99.95th=[86508], 00:22:37.336 | 99.99th=[86508] 00:22:37.336 bw ( KiB/s): min= 1760, max=62544, per=86.40%, avg=22797.87, stdev=15627.11, samples=23 00:22:37.336 iops : min= 440, max=15636, avg=5699.43, stdev=3906.76, samples=23 00:22:37.336 lat (usec) : 500=0.01%, 750=2.17%, 1000=7.00% 00:22:37.336 lat (msec) : 2=7.85%, 4=9.70%, 10=13.22%, 20=5.98%, 50=49.41% 00:22:37.336 lat (msec) : 100=3.49%, 250=1.16%, 500=0.01% 00:22:37.336 cpu : usr=99.46%, sys=0.10%, ctx=45, majf=0, minf=5514 00:22:37.336 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:37.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:37.336 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:37.336 issued rwts: total=65313,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:37.336 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:37.336 00:22:37.336 Run status group 0 (all jobs): 00:22:37.336 READ: bw=23.3MiB/s (24.4MB/s), 11.6MiB/s-11.7MiB/s (12.2MB/s-12.3MB/s), io=510MiB (535MB), run=21782-21927msec 00:22:37.337 WRITE: bw=25.8MiB/s (27.0MB/s), 12.9MiB/s-14.2MiB/s (13.5MB/s-14.9MB/s), io=512MiB (537MB), run=18030-19870msec 00:22:37.337 ----------------------------------------------------- 00:22:37.337 Suppressions used: 00:22:37.337 count bytes template 00:22:37.337 2 10 /usr/src/fio/parse.c 00:22:37.337 4 384 /usr/src/fio/iolog.c 00:22:37.337 1 8 libtcmalloc_minimal.so 00:22:37.337 1 904 libcrypto.so 00:22:37.337 ----------------------------------------------------- 00:22:37.337 00:22:37.337 19:40:55 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:22:37.337 19:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:37.337 19:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:37.337 19:40:55 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:22:37.337 19:40:55 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:22:37.337 19:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:37.337 19:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:37.337 19:40:55 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:37.337 19:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:37.337 19:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:37.337 19:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:22:37.337 19:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:37.337 19:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:37.337 19:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:22:37.337 19:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:37.337 19:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:37.337 19:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:22:37.337 19:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:22:37.337 19:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:37.337 19:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:37.337 19:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:37.337 19:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:22:37.337 19:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:22:37.337 19:40:55 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:22:37.337 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:22:37.337 fio-3.35 00:22:37.337 Starting 1 thread 00:22:52.299 00:22:52.299 test: (groupid=0, jobs=1): err= 0: pid=75940: Thu Dec 5 19:41:08 2024 00:22:52.299 read: IOPS=7990, BW=31.2MiB/s (32.7MB/s)(255MiB/8160msec) 00:22:52.299 slat (nsec): min=3096, max=17834, avg=3585.90, stdev=734.45 00:22:52.299 clat (usec): min=477, max=74025, avg=16010.78, stdev=2504.11 00:22:52.299 lat (usec): min=484, max=74029, avg=16014.37, stdev=2504.13 00:22:52.299 clat percentiles (usec): 00:22:52.299 | 1.00th=[14484], 5.00th=[14746], 10.00th=[14877], 20.00th=[15008], 00:22:52.299 | 30.00th=[15139], 40.00th=[15270], 50.00th=[15401], 60.00th=[15533], 00:22:52.299 | 70.00th=[15795], 80.00th=[16057], 90.00th=[17433], 95.00th=[20317], 00:22:52.299 | 99.00th=[24249], 99.50th=[30540], 99.90th=[42206], 99.95th=[43779], 00:22:52.299 | 99.99th=[66847] 00:22:52.299 write: IOPS=16.2k, BW=63.4MiB/s (66.5MB/s)(256MiB/4038msec); 0 zone resets 00:22:52.299 slat (usec): min=4, max=108, avg= 5.48, stdev= 2.09 00:22:52.299 clat (usec): min=488, max=45397, avg=7844.98, stdev=9682.18 00:22:52.299 lat (usec): min=493, max=45402, avg=7850.46, stdev=9682.18 00:22:52.299 clat percentiles (usec): 00:22:52.299 | 1.00th=[ 619], 5.00th=[ 693], 10.00th=[ 750], 20.00th=[ 922], 00:22:52.299 | 30.00th=[ 1074], 40.00th=[ 1516], 50.00th=[ 5342], 60.00th=[ 6063], 00:22:52.299 | 70.00th=[ 7111], 80.00th=[ 8717], 90.00th=[27919], 95.00th=[29754], 00:22:52.299 | 99.00th=[33817], 99.50th=[35390], 99.90th=[38011], 99.95th=[38536], 00:22:52.299 | 99.99th=[44303] 00:22:52.299 bw ( KiB/s): min= 3000, max=88248, per=89.73%, avg=58254.22, stdev=23334.06, samples=9 00:22:52.299 iops : min= 750, max=22062, avg=14563.56, stdev=5833.52, samples=9 00:22:52.299 lat (usec) : 500=0.01%, 750=4.97%, 1000=7.83% 00:22:52.299 lat (msec) : 2=7.81%, 4=0.59%, 10=20.48%, 20=47.59%, 50=10.71% 00:22:52.299 lat (msec) : 100=0.01% 00:22:52.299 cpu : 
usr=99.16%, sys=0.16%, ctx=23, majf=0, minf=5565 00:22:52.299 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:52.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:52.299 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:52.299 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:52.299 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:52.299 00:22:52.299 Run status group 0 (all jobs): 00:22:52.299 READ: bw=31.2MiB/s (32.7MB/s), 31.2MiB/s-31.2MiB/s (32.7MB/s-32.7MB/s), io=255MiB (267MB), run=8160-8160msec 00:22:52.299 WRITE: bw=63.4MiB/s (66.5MB/s), 63.4MiB/s-63.4MiB/s (66.5MB/s-66.5MB/s), io=256MiB (268MB), run=4038-4038msec 00:22:52.299 ----------------------------------------------------- 00:22:52.299 Suppressions used: 00:22:52.299 count bytes template 00:22:52.299 1 5 /usr/src/fio/parse.c 00:22:52.299 2 192 /usr/src/fio/iolog.c 00:22:52.299 1 8 libtcmalloc_minimal.so 00:22:52.299 1 904 libcrypto.so 00:22:52.299 ----------------------------------------------------- 00:22:52.299 00:22:52.299 19:41:10 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:22:52.299 19:41:10 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:52.299 19:41:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:52.299 19:41:10 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:52.299 Remove shared memory files 00:22:52.299 19:41:10 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:22:52.299 19:41:10 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:52.299 19:41:10 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:22:52.299 19:41:10 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:22:52.299 19:41:10 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57211 /dev/shm/spdk_tgt_trace.pid74269 00:22:52.299 19:41:10 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:52.299 19:41:10 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:22:52.299 ************************************ 00:22:52.299 END TEST ftl_fio_basic 00:22:52.299 ************************************ 00:22:52.299 00:22:52.299 real 0m59.072s 00:22:52.299 user 2m2.397s 00:22:52.299 sys 0m7.159s 00:22:52.299 19:41:10 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:52.299 19:41:10 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:52.299 19:41:10 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:22:52.299 19:41:10 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:52.299 19:41:10 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:52.299 19:41:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:52.299 ************************************ 00:22:52.299 START TEST ftl_bdevperf 00:22:52.299 ************************************ 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:22:52.299 * Looking for test storage... 
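Before the bdevperf run begins, it is worth condensing the fio invocation pattern that all three ftl_fio_basic jobs above shared: ldd the spdk_bdev fio plugin, pick out the libasan path, and LD_PRELOAD the sanitizer runtime ahead of the plugin so ASan initializes before any instrumented code runs. A sketch with the paths copied from the xtrace (the job file argument is a placeholder):

  # condensed sketch of the fio_bdev/fio_plugin flow traced above
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  # preload the sanitizer runtime first, then the fio plugin itself
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio <job>.fio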
00:22:52.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:52.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.299 --rc genhtml_branch_coverage=1 00:22:52.299 --rc genhtml_function_coverage=1 00:22:52.299 --rc genhtml_legend=1 00:22:52.299 --rc geninfo_all_blocks=1 00:22:52.299 --rc geninfo_unexecuted_blocks=1 00:22:52.299 00:22:52.299 ' 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:52.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.299 --rc genhtml_branch_coverage=1 00:22:52.299 
--rc genhtml_function_coverage=1 00:22:52.299 --rc genhtml_legend=1 00:22:52.299 --rc geninfo_all_blocks=1 00:22:52.299 --rc geninfo_unexecuted_blocks=1 00:22:52.299 00:22:52.299 ' 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:52.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.299 --rc genhtml_branch_coverage=1 00:22:52.299 --rc genhtml_function_coverage=1 00:22:52.299 --rc genhtml_legend=1 00:22:52.299 --rc geninfo_all_blocks=1 00:22:52.299 --rc geninfo_unexecuted_blocks=1 00:22:52.299 00:22:52.299 ' 00:22:52.299 19:41:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:52.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.300 --rc genhtml_branch_coverage=1 00:22:52.300 --rc genhtml_function_coverage=1 00:22:52.300 --rc genhtml_legend=1 00:22:52.300 --rc geninfo_all_blocks=1 00:22:52.300 --rc geninfo_unexecuted_blocks=1 00:22:52.300 00:22:52.300 ' 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=76161 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 76161 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 76161 ']' 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:52.300 19:41:10 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:52.300 [2024-12-05 19:41:10.544351] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
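Once bdevperf is up and listening on /var/tmp/spdk.sock, the script drives the device setup over RPC. A condensed sketch of the sequence traced below, with commands copied from the xtrace and the lvstore UUID left as a placeholder (the actual UUID appears in the trace); note that -t requests thin provisioning, which is why a 103424 MiB lvol can be carved out of the 5 GiB (1310720 x 4096 B) QEMU namespace:

  # condensed sketch of the RPC-driven bdev setup traced below
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  $rpc bdev_lvol_get_lvstores                 # clear_lvols: list and delete stale stores
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>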
00:22:52.300 [2024-12-05 19:41:10.544748] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76161 ] 00:22:52.300 [2024-12-05 19:41:10.711670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.300 [2024-12-05 19:41:10.811002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.561 19:41:11 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.561 19:41:11 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:22:52.561 19:41:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:52.561 19:41:11 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:22:52.561 19:41:11 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:52.561 19:41:11 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:22:52.561 19:41:11 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:22:52.561 19:41:11 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:52.822 19:41:11 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:52.822 19:41:11 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:22:52.822 19:41:11 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:52.822 19:41:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:52.822 19:41:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:52.822 19:41:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:22:52.822 19:41:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:52.822 19:41:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:53.083 19:41:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:53.083 { 00:22:53.083 "name": "nvme0n1", 00:22:53.083 "aliases": [ 00:22:53.083 "5434684c-5030-439d-941b-b962ab11e348" 00:22:53.083 ], 00:22:53.083 "product_name": "NVMe disk", 00:22:53.083 "block_size": 4096, 00:22:53.083 "num_blocks": 1310720, 00:22:53.083 "uuid": "5434684c-5030-439d-941b-b962ab11e348", 00:22:53.083 "numa_id": -1, 00:22:53.083 "assigned_rate_limits": { 00:22:53.083 "rw_ios_per_sec": 0, 00:22:53.083 "rw_mbytes_per_sec": 0, 00:22:53.083 "r_mbytes_per_sec": 0, 00:22:53.083 "w_mbytes_per_sec": 0 00:22:53.083 }, 00:22:53.083 "claimed": true, 00:22:53.083 "claim_type": "read_many_write_one", 00:22:53.083 "zoned": false, 00:22:53.083 "supported_io_types": { 00:22:53.083 "read": true, 00:22:53.083 "write": true, 00:22:53.083 "unmap": true, 00:22:53.083 "flush": true, 00:22:53.083 "reset": true, 00:22:53.083 "nvme_admin": true, 00:22:53.083 "nvme_io": true, 00:22:53.083 "nvme_io_md": false, 00:22:53.083 "write_zeroes": true, 00:22:53.083 "zcopy": false, 00:22:53.083 "get_zone_info": false, 00:22:53.083 "zone_management": false, 00:22:53.083 "zone_append": false, 00:22:53.083 "compare": true, 00:22:53.083 "compare_and_write": false, 00:22:53.083 "abort": true, 00:22:53.083 "seek_hole": false, 00:22:53.083 "seek_data": false, 00:22:53.083 "copy": true, 00:22:53.083 "nvme_iov_md": false 00:22:53.083 }, 00:22:53.083 "driver_specific": { 00:22:53.083 
"nvme": [ 00:22:53.083 { 00:22:53.083 "pci_address": "0000:00:11.0", 00:22:53.083 "trid": { 00:22:53.083 "trtype": "PCIe", 00:22:53.083 "traddr": "0000:00:11.0" 00:22:53.083 }, 00:22:53.083 "ctrlr_data": { 00:22:53.083 "cntlid": 0, 00:22:53.083 "vendor_id": "0x1b36", 00:22:53.083 "model_number": "QEMU NVMe Ctrl", 00:22:53.083 "serial_number": "12341", 00:22:53.083 "firmware_revision": "8.0.0", 00:22:53.083 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:53.083 "oacs": { 00:22:53.083 "security": 0, 00:22:53.083 "format": 1, 00:22:53.083 "firmware": 0, 00:22:53.083 "ns_manage": 1 00:22:53.083 }, 00:22:53.083 "multi_ctrlr": false, 00:22:53.083 "ana_reporting": false 00:22:53.083 }, 00:22:53.083 "vs": { 00:22:53.083 "nvme_version": "1.4" 00:22:53.083 }, 00:22:53.083 "ns_data": { 00:22:53.083 "id": 1, 00:22:53.083 "can_share": false 00:22:53.083 } 00:22:53.083 } 00:22:53.083 ], 00:22:53.083 "mp_policy": "active_passive" 00:22:53.083 } 00:22:53.083 } 00:22:53.083 ]' 00:22:53.083 19:41:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:53.083 19:41:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:53.083 19:41:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:53.083 19:41:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:53.083 19:41:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:53.083 19:41:11 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:22:53.083 19:41:11 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:22:53.083 19:41:11 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:53.083 19:41:11 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:22:53.083 19:41:11 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:53.083 19:41:11 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:53.344 19:41:12 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=1b769824-a594-49b2-97d1-d97750493f89 00:22:53.344 19:41:12 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:22:53.344 19:41:12 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1b769824-a594-49b2-97d1-d97750493f89 00:22:53.603 19:41:12 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:53.603 19:41:12 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=e291c6bc-328c-4d8b-abd4-e15bb702b2b9 00:22:53.603 19:41:12 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e291c6bc-328c-4d8b-abd4-e15bb702b2b9 00:22:53.919 19:41:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=bcb6e3ef-3abb-45e9-8f62-68d118c980d8 00:22:53.919 19:41:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 bcb6e3ef-3abb-45e9-8f62-68d118c980d8 00:22:53.919 19:41:12 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:22:53.919 19:41:12 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:53.919 19:41:12 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=bcb6e3ef-3abb-45e9-8f62-68d118c980d8 00:22:53.919 19:41:12 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:22:53.919 19:41:12 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size bcb6e3ef-3abb-45e9-8f62-68d118c980d8 00:22:53.919 19:41:12 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=bcb6e3ef-3abb-45e9-8f62-68d118c980d8 00:22:53.919 19:41:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:53.919 19:41:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:22:53.919 19:41:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:53.919 19:41:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bcb6e3ef-3abb-45e9-8f62-68d118c980d8 00:22:54.180 19:41:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:54.180 { 00:22:54.180 "name": "bcb6e3ef-3abb-45e9-8f62-68d118c980d8", 00:22:54.180 "aliases": [ 00:22:54.180 "lvs/nvme0n1p0" 00:22:54.180 ], 00:22:54.180 "product_name": "Logical Volume", 00:22:54.180 "block_size": 4096, 00:22:54.180 "num_blocks": 26476544, 00:22:54.180 "uuid": "bcb6e3ef-3abb-45e9-8f62-68d118c980d8", 00:22:54.180 "assigned_rate_limits": { 00:22:54.180 "rw_ios_per_sec": 0, 00:22:54.180 "rw_mbytes_per_sec": 0, 00:22:54.180 "r_mbytes_per_sec": 0, 00:22:54.180 "w_mbytes_per_sec": 0 00:22:54.180 }, 00:22:54.180 "claimed": false, 00:22:54.180 "zoned": false, 00:22:54.180 "supported_io_types": { 00:22:54.180 "read": true, 00:22:54.180 "write": true, 00:22:54.180 "unmap": true, 00:22:54.180 "flush": false, 00:22:54.180 "reset": true, 00:22:54.180 "nvme_admin": false, 00:22:54.180 "nvme_io": false, 00:22:54.180 "nvme_io_md": false, 00:22:54.180 "write_zeroes": true, 00:22:54.180 "zcopy": false, 00:22:54.180 "get_zone_info": false, 00:22:54.180 "zone_management": false, 00:22:54.180 "zone_append": false, 00:22:54.180 "compare": false, 00:22:54.180 "compare_and_write": false, 00:22:54.180 "abort": false, 00:22:54.180 "seek_hole": true, 00:22:54.180 "seek_data": true, 00:22:54.180 "copy": false, 00:22:54.180 "nvme_iov_md": false 00:22:54.180 }, 00:22:54.180 "driver_specific": { 00:22:54.180 "lvol": { 00:22:54.180 "lvol_store_uuid": "e291c6bc-328c-4d8b-abd4-e15bb702b2b9", 00:22:54.180 "base_bdev": "nvme0n1", 00:22:54.180 "thin_provision": true, 00:22:54.180 "num_allocated_clusters": 0, 00:22:54.180 "snapshot": false, 00:22:54.180 "clone": false, 00:22:54.180 "esnap_clone": false 00:22:54.180 } 00:22:54.180 } 00:22:54.180 } 00:22:54.180 ]' 00:22:54.181 19:41:12 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:54.181 19:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:54.181 19:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:54.181 19:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:54.181 19:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:54.181 19:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:22:54.181 19:41:13 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:22:54.181 19:41:13 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:22:54.181 19:41:13 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:54.441 19:41:13 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:54.441 19:41:13 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:54.441 19:41:13 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size bcb6e3ef-3abb-45e9-8f62-68d118c980d8 00:22:54.441 19:41:13 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=bcb6e3ef-3abb-45e9-8f62-68d118c980d8 00:22:54.441 19:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:54.441 19:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:22:54.441 19:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:54.441 19:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bcb6e3ef-3abb-45e9-8f62-68d118c980d8 00:22:54.702 19:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:54.702 { 00:22:54.702 "name": "bcb6e3ef-3abb-45e9-8f62-68d118c980d8", 00:22:54.702 "aliases": [ 00:22:54.702 "lvs/nvme0n1p0" 00:22:54.702 ], 00:22:54.702 "product_name": "Logical Volume", 00:22:54.702 "block_size": 4096, 00:22:54.702 "num_blocks": 26476544, 00:22:54.702 "uuid": "bcb6e3ef-3abb-45e9-8f62-68d118c980d8", 00:22:54.702 "assigned_rate_limits": { 00:22:54.702 "rw_ios_per_sec": 0, 00:22:54.702 "rw_mbytes_per_sec": 0, 00:22:54.702 "r_mbytes_per_sec": 0, 00:22:54.702 "w_mbytes_per_sec": 0 00:22:54.702 }, 00:22:54.702 "claimed": false, 00:22:54.702 "zoned": false, 00:22:54.702 "supported_io_types": { 00:22:54.702 "read": true, 00:22:54.702 "write": true, 00:22:54.702 "unmap": true, 00:22:54.702 "flush": false, 00:22:54.702 "reset": true, 00:22:54.702 "nvme_admin": false, 00:22:54.702 "nvme_io": false, 00:22:54.702 "nvme_io_md": false, 00:22:54.702 "write_zeroes": true, 00:22:54.702 "zcopy": false, 00:22:54.702 "get_zone_info": false, 00:22:54.702 "zone_management": false, 00:22:54.702 "zone_append": false, 00:22:54.702 "compare": false, 00:22:54.702 "compare_and_write": false, 00:22:54.702 "abort": false, 00:22:54.702 "seek_hole": true, 00:22:54.702 "seek_data": true, 00:22:54.702 "copy": false, 00:22:54.702 "nvme_iov_md": false 00:22:54.702 }, 00:22:54.702 "driver_specific": { 00:22:54.702 "lvol": { 00:22:54.702 "lvol_store_uuid": "e291c6bc-328c-4d8b-abd4-e15bb702b2b9", 00:22:54.702 "base_bdev": "nvme0n1", 00:22:54.702 "thin_provision": true, 00:22:54.702 "num_allocated_clusters": 0, 00:22:54.702 "snapshot": false, 00:22:54.702 "clone": false, 00:22:54.702 "esnap_clone": false 00:22:54.702 } 00:22:54.702 } 00:22:54.702 } 00:22:54.702 ]' 00:22:54.702 19:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:54.702 19:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:54.702 19:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:54.702 19:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:54.702 19:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:54.702 19:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:22:54.702 19:41:13 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:22:54.702 19:41:13 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:54.964 19:41:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:22:54.964 19:41:13 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size bcb6e3ef-3abb-45e9-8f62-68d118c980d8 00:22:54.964 19:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=bcb6e3ef-3abb-45e9-8f62-68d118c980d8 00:22:54.964 19:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:54.964 19:41:13 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:22:54.964 19:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:54.964 19:41:13 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b bcb6e3ef-3abb-45e9-8f62-68d118c980d8 00:22:55.224 19:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:55.224 { 00:22:55.224 "name": "bcb6e3ef-3abb-45e9-8f62-68d118c980d8", 00:22:55.224 "aliases": [ 00:22:55.224 "lvs/nvme0n1p0" 00:22:55.224 ], 00:22:55.224 "product_name": "Logical Volume", 00:22:55.224 "block_size": 4096, 00:22:55.224 "num_blocks": 26476544, 00:22:55.224 "uuid": "bcb6e3ef-3abb-45e9-8f62-68d118c980d8", 00:22:55.224 "assigned_rate_limits": { 00:22:55.224 "rw_ios_per_sec": 0, 00:22:55.224 "rw_mbytes_per_sec": 0, 00:22:55.224 "r_mbytes_per_sec": 0, 00:22:55.224 "w_mbytes_per_sec": 0 00:22:55.224 }, 00:22:55.224 "claimed": false, 00:22:55.224 "zoned": false, 00:22:55.224 "supported_io_types": { 00:22:55.224 "read": true, 00:22:55.224 "write": true, 00:22:55.224 "unmap": true, 00:22:55.224 "flush": false, 00:22:55.224 "reset": true, 00:22:55.224 "nvme_admin": false, 00:22:55.224 "nvme_io": false, 00:22:55.224 "nvme_io_md": false, 00:22:55.224 "write_zeroes": true, 00:22:55.224 "zcopy": false, 00:22:55.224 "get_zone_info": false, 00:22:55.224 "zone_management": false, 00:22:55.224 "zone_append": false, 00:22:55.224 "compare": false, 00:22:55.225 "compare_and_write": false, 00:22:55.225 "abort": false, 00:22:55.225 "seek_hole": true, 00:22:55.225 "seek_data": true, 00:22:55.225 "copy": false, 00:22:55.225 "nvme_iov_md": false 00:22:55.225 }, 00:22:55.225 "driver_specific": { 00:22:55.225 "lvol": { 00:22:55.225 "lvol_store_uuid": "e291c6bc-328c-4d8b-abd4-e15bb702b2b9", 00:22:55.225 "base_bdev": "nvme0n1", 00:22:55.225 "thin_provision": true, 00:22:55.225 "num_allocated_clusters": 0, 00:22:55.225 "snapshot": false, 00:22:55.225 "clone": false, 00:22:55.225 "esnap_clone": false 00:22:55.225 } 00:22:55.225 } 00:22:55.225 } 00:22:55.225 ]' 00:22:55.225 19:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:55.225 19:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:55.225 19:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:55.225 19:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:55.225 19:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:55.225 19:41:14 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:22:55.225 19:41:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:22:55.225 19:41:14 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d bcb6e3ef-3abb-45e9-8f62-68d118c980d8 -c nvc0n1p0 --l2p_dram_limit 20 00:22:55.486 [2024-12-05 19:41:14.275157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.486 [2024-12-05 19:41:14.275209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:55.486 [2024-12-05 19:41:14.275228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:55.486 [2024-12-05 19:41:14.275240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.486 [2024-12-05 19:41:14.275295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.486 [2024-12-05 19:41:14.275307] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:55.486 [2024-12-05 19:41:14.275315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:55.486 [2024-12-05 19:41:14.275324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.486 [2024-12-05 19:41:14.275341] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:55.486 [2024-12-05 19:41:14.277961] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:55.486 [2024-12-05 19:41:14.277994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.486 [2024-12-05 19:41:14.278005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:55.486 [2024-12-05 19:41:14.278020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.658 ms 00:22:55.486 [2024-12-05 19:41:14.278030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.486 [2024-12-05 19:41:14.278095] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c9ed67bd-e6c0-48b3-87ed-7e0f8782c630 00:22:55.486 [2024-12-05 19:41:14.279160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.486 [2024-12-05 19:41:14.279190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:55.486 [2024-12-05 19:41:14.279204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:55.486 [2024-12-05 19:41:14.279212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.486 [2024-12-05 19:41:14.284252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.486 [2024-12-05 19:41:14.284281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:55.487 [2024-12-05 19:41:14.284292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.990 ms 00:22:55.487 [2024-12-05 19:41:14.284303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.487 [2024-12-05 19:41:14.284382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.487 [2024-12-05 19:41:14.284391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:55.487 [2024-12-05 19:41:14.284404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:22:55.487 [2024-12-05 19:41:14.284412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.487 [2024-12-05 19:41:14.284451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.487 [2024-12-05 19:41:14.284459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:55.487 [2024-12-05 19:41:14.284469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:55.487 [2024-12-05 19:41:14.284476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.487 [2024-12-05 19:41:14.284498] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:55.487 [2024-12-05 19:41:14.288000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.487 [2024-12-05 19:41:14.288030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:55.487 [2024-12-05 19:41:14.288039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.511 ms 00:22:55.487 [2024-12-05 19:41:14.288050] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.487 [2024-12-05 19:41:14.288081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.487 [2024-12-05 19:41:14.288090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:55.487 [2024-12-05 19:41:14.288098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:22:55.487 [2024-12-05 19:41:14.288107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.487 [2024-12-05 19:41:14.288121] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:55.487 [2024-12-05 19:41:14.288280] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:55.487 [2024-12-05 19:41:14.288292] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:55.487 [2024-12-05 19:41:14.288305] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:55.487 [2024-12-05 19:41:14.288315] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:55.487 [2024-12-05 19:41:14.288326] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:55.487 [2024-12-05 19:41:14.288334] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:55.487 [2024-12-05 19:41:14.288343] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:55.487 [2024-12-05 19:41:14.288350] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:55.487 [2024-12-05 19:41:14.288359] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:55.487 [2024-12-05 19:41:14.288368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.487 [2024-12-05 19:41:14.288377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:55.487 [2024-12-05 19:41:14.288384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:22:55.487 [2024-12-05 19:41:14.288393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.487 [2024-12-05 19:41:14.288481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.487 [2024-12-05 19:41:14.288495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:55.487 [2024-12-05 19:41:14.288503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:55.487 [2024-12-05 19:41:14.288513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.487 [2024-12-05 19:41:14.288614] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:55.487 [2024-12-05 19:41:14.288628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:55.487 [2024-12-05 19:41:14.288636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:55.487 [2024-12-05 19:41:14.288644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.487 [2024-12-05 19:41:14.288653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:55.487 [2024-12-05 19:41:14.288661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:55.487 [2024-12-05 19:41:14.288668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:55.487 
[2024-12-05 19:41:14.288676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:55.487 [2024-12-05 19:41:14.288683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:55.487 [2024-12-05 19:41:14.288692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:55.487 [2024-12-05 19:41:14.288699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:55.487 [2024-12-05 19:41:14.288713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:55.487 [2024-12-05 19:41:14.288721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:55.487 [2024-12-05 19:41:14.288729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:55.487 [2024-12-05 19:41:14.288735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:55.487 [2024-12-05 19:41:14.288745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.487 [2024-12-05 19:41:14.288751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:55.487 [2024-12-05 19:41:14.288759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:55.487 [2024-12-05 19:41:14.288767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.487 [2024-12-05 19:41:14.288776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:55.487 [2024-12-05 19:41:14.288783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:55.487 [2024-12-05 19:41:14.288791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:55.487 [2024-12-05 19:41:14.288797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:55.487 [2024-12-05 19:41:14.288805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:55.487 [2024-12-05 19:41:14.288811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:55.487 [2024-12-05 19:41:14.288819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:55.487 [2024-12-05 19:41:14.288825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:55.487 [2024-12-05 19:41:14.288833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:55.487 [2024-12-05 19:41:14.288840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:55.487 [2024-12-05 19:41:14.288848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:55.487 [2024-12-05 19:41:14.288855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:55.487 [2024-12-05 19:41:14.288865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:55.487 [2024-12-05 19:41:14.288871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:55.487 [2024-12-05 19:41:14.288881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:55.487 [2024-12-05 19:41:14.288887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:55.487 [2024-12-05 19:41:14.288896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:55.487 [2024-12-05 19:41:14.288902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:55.487 [2024-12-05 19:41:14.288910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:55.487 [2024-12-05 19:41:14.288917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:22:55.487 [2024-12-05 19:41:14.288925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.487 [2024-12-05 19:41:14.288931] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:55.487 [2024-12-05 19:41:14.288939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:55.487 [2024-12-05 19:41:14.288946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.487 [2024-12-05 19:41:14.288953] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:55.487 [2024-12-05 19:41:14.288960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:55.487 [2024-12-05 19:41:14.288969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:55.487 [2024-12-05 19:41:14.288976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:55.487 [2024-12-05 19:41:14.288986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:55.487 [2024-12-05 19:41:14.288992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:55.487 [2024-12-05 19:41:14.289000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:55.487 [2024-12-05 19:41:14.289008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:55.487 [2024-12-05 19:41:14.289015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:55.487 [2024-12-05 19:41:14.289023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:55.487 [2024-12-05 19:41:14.289033] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:55.487 [2024-12-05 19:41:14.289042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:55.487 [2024-12-05 19:41:14.289051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:55.487 [2024-12-05 19:41:14.289059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:55.487 [2024-12-05 19:41:14.289067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:55.487 [2024-12-05 19:41:14.289074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:55.487 [2024-12-05 19:41:14.289083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:55.487 [2024-12-05 19:41:14.289090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:55.487 [2024-12-05 19:41:14.289099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:55.488 [2024-12-05 19:41:14.289107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:55.488 [2024-12-05 19:41:14.289117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:55.488 [2024-12-05 19:41:14.289124] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:55.488 [2024-12-05 19:41:14.289149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:55.488 [2024-12-05 19:41:14.289156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:55.488 [2024-12-05 19:41:14.289165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:55.488 [2024-12-05 19:41:14.289172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:55.488 [2024-12-05 19:41:14.289181] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:55.488 [2024-12-05 19:41:14.289190] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:55.488 [2024-12-05 19:41:14.289201] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:55.488 [2024-12-05 19:41:14.289208] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:55.488 [2024-12-05 19:41:14.289217] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:55.488 [2024-12-05 19:41:14.289225] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:55.488 [2024-12-05 19:41:14.289234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:55.488 [2024-12-05 19:41:14.289241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:55.488 [2024-12-05 19:41:14.289250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.685 ms 00:22:55.488 [2024-12-05 19:41:14.289258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:55.488 [2024-12-05 19:41:14.289290] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
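The superblock dump above gives each region as a hex block offset and block size, while the earlier layout dump reports the same regions in MiB. Assuming FTL's usual 4 KiB block size (the block size itself is not printed in this log), the two views agree; a quick shell cross-check for the l2p region (type:0x2, blk_sz:0x5000):

    # Hedged sketch: convert a blk_sz from the superblock dump to MiB,
    # assuming 4096-byte FTL blocks. 0x5000 blocks -> 80 MiB, matching
    # "Region l2p ... blocks: 80.00 MiB" in the NV cache layout above.
    blk_sz=0x5000
    echo "$(( blk_sz * 4096 / 1024 / 1024 )) MiB"   # prints: 80 MiB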
00:22:55.488 [2024-12-05 19:41:14.289300] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:58.057 [2024-12-05 19:41:16.517158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.057 [2024-12-05 19:41:16.517220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:58.057 [2024-12-05 19:41:16.517237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2227.852 ms 00:22:58.057 [2024-12-05 19:41:16.517246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.057 [2024-12-05 19:41:16.542696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.057 [2024-12-05 19:41:16.542739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:58.057 [2024-12-05 19:41:16.542753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.246 ms 00:22:58.057 [2024-12-05 19:41:16.542761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.057 [2024-12-05 19:41:16.542888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.057 [2024-12-05 19:41:16.542898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:58.057 [2024-12-05 19:41:16.542910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:22:58.057 [2024-12-05 19:41:16.542918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.057 [2024-12-05 19:41:16.586268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.057 [2024-12-05 19:41:16.586315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:58.057 [2024-12-05 19:41:16.586331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.298 ms 00:22:58.057 [2024-12-05 19:41:16.586341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.057 [2024-12-05 19:41:16.586387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.057 [2024-12-05 19:41:16.586397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:58.057 [2024-12-05 19:41:16.586407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:58.057 [2024-12-05 19:41:16.586416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.057 [2024-12-05 19:41:16.586773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.057 [2024-12-05 19:41:16.586789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:58.057 [2024-12-05 19:41:16.586800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:22:58.057 [2024-12-05 19:41:16.586807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.057 [2024-12-05 19:41:16.586922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.057 [2024-12-05 19:41:16.586931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:58.057 [2024-12-05 19:41:16.586943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:22:58.057 [2024-12-05 19:41:16.586950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.057 [2024-12-05 19:41:16.599889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.057 [2024-12-05 19:41:16.599922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:58.057 [2024-12-05 
19:41:16.599934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.919 ms 00:22:58.057 [2024-12-05 19:41:16.599948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.057 [2024-12-05 19:41:16.611247] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:22:58.057 [2024-12-05 19:41:16.616250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.057 [2024-12-05 19:41:16.616283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:58.057 [2024-12-05 19:41:16.616294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.230 ms 00:22:58.057 [2024-12-05 19:41:16.616304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.057 [2024-12-05 19:41:16.673207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.057 [2024-12-05 19:41:16.673260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:58.057 [2024-12-05 19:41:16.673276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.877 ms 00:22:58.057 [2024-12-05 19:41:16.673287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.057 [2024-12-05 19:41:16.673462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.057 [2024-12-05 19:41:16.673478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:58.057 [2024-12-05 19:41:16.673486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:22:58.057 [2024-12-05 19:41:16.673497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.057 [2024-12-05 19:41:16.697346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.057 [2024-12-05 19:41:16.697385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:58.057 [2024-12-05 19:41:16.697397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.806 ms 00:22:58.057 [2024-12-05 19:41:16.697407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.057 [2024-12-05 19:41:16.726372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.057 [2024-12-05 19:41:16.726412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:58.057 [2024-12-05 19:41:16.726425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.931 ms 00:22:58.057 [2024-12-05 19:41:16.726435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.057 [2024-12-05 19:41:16.726987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.057 [2024-12-05 19:41:16.727004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:58.057 [2024-12-05 19:41:16.727013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:22:58.057 [2024-12-05 19:41:16.727022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.057 [2024-12-05 19:41:16.800359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.057 [2024-12-05 19:41:16.800414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:58.057 [2024-12-05 19:41:16.800426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.306 ms 00:22:58.057 [2024-12-05 19:41:16.800436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.057 [2024-12-05 
19:41:16.825306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.057 [2024-12-05 19:41:16.825485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:58.057 [2024-12-05 19:41:16.825505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.799 ms 00:22:58.057 [2024-12-05 19:41:16.825514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.057 [2024-12-05 19:41:16.849611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.057 [2024-12-05 19:41:16.849652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:58.057 [2024-12-05 19:41:16.849663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.065 ms 00:22:58.057 [2024-12-05 19:41:16.849671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.057 [2024-12-05 19:41:16.873603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.057 [2024-12-05 19:41:16.873653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:58.057 [2024-12-05 19:41:16.873663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.899 ms 00:22:58.057 [2024-12-05 19:41:16.873673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.057 [2024-12-05 19:41:16.873709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.057 [2024-12-05 19:41:16.873721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:58.057 [2024-12-05 19:41:16.873729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:58.057 [2024-12-05 19:41:16.873739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.057 [2024-12-05 19:41:16.873811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:58.057 [2024-12-05 19:41:16.873823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:58.057 [2024-12-05 19:41:16.873832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:22:58.057 [2024-12-05 19:41:16.873841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:58.057 [2024-12-05 19:41:16.874680] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2599.134 ms, result 0 00:22:58.057 { 00:22:58.057 "name": "ftl0", 00:22:58.057 "uuid": "c9ed67bd-e6c0-48b3-87ed-7e0f8782c630" 00:22:58.057 } 00:22:58.058 19:41:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:22:58.058 19:41:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:22:58.058 19:41:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:22:58.319 19:41:17 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:22:58.319 [2024-12-05 19:41:17.191034] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:22:58.319 I/O size of 69632 is greater than zero copy threshold (65536). 00:22:58.319 Zero copy mechanism will not be used. 00:22:58.319 Running I/O for 4 seconds... 
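The three ftl/bdevperf.sh@28 steps traced above act as a readiness gate: the stats RPC is piped through jq and grep so the test only proceeds once ftl0 has actually registered. A minimal standalone sketch of that check (paths as in this run; the exit handling is an assumption, not part of the script shown):

    # Hedged sketch of the bdevperf.sh@28 readiness check traced above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    if "$rpc" bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0; then
        echo "ftl0 registered, safe to drive I/O"
    else
        echo "ftl0 did not come up" >&2
        exit 1
    fi

The zero-copy notice above is plain arithmetic: the 69632-byte I/O size exceeds bdevperf's 65536-byte zero copy threshold, so this pass falls back to bounce buffers.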
00:23:00.205 1183.00 IOPS, 78.56 MiB/s [2024-12-05T19:41:20.217Z] 1250.50 IOPS, 83.04 MiB/s [2024-12-05T19:41:21.630Z] 1229.00 IOPS, 81.61 MiB/s [2024-12-05T19:41:21.630Z] 1165.50 IOPS, 77.40 MiB/s
00:23:02.624 Latency(us)
00:23:02.624 [2024-12-05T19:41:21.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:02.624 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:23:02.624 ftl0 : 4.00 1165.23 77.38 0.00 0.00 904.24 184.32 86305.87
00:23:02.624 [2024-12-05T19:41:21.630Z] ===================================================================================================================
00:23:02.624 [2024-12-05T19:41:21.630Z] Total : 1165.23 77.38 0.00 0.00 904.24 184.32 86305.87
00:23:02.624 {
00:23:02.624 "results": [
00:23:02.624 {
00:23:02.624 "job": "ftl0",
00:23:02.624 "core_mask": "0x1",
00:23:02.624 "workload": "randwrite",
00:23:02.624 "status": "finished",
00:23:02.624 "queue_depth": 1,
00:23:02.624 "io_size": 69632,
00:23:02.624 "runtime": 4.001787,
00:23:02.624 "iops": 1165.229433750472,
00:23:02.624 "mibps": 77.37851708499228,
00:23:02.624 "io_failed": 0,
00:23:02.624 "io_timeout": 0,
00:23:02.624 "avg_latency_us": 904.237406753658,
00:23:02.624 "min_latency_us": 184.32,
00:23:02.624 "max_latency_us": 86305.87076923077
00:23:02.624 }
00:23:02.624 ],
00:23:02.624 "core_count": 1
00:23:02.624 }
00:23:02.624 [2024-12-05 19:41:21.201395] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
19:41:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-12-05 19:41:21.308527] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
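perform_tests reports each pass twice, as the human-readable latency table and as the JSON block above; the JSON is the machine-friendly copy. A hedged sketch of pulling the headline numbers back out of it (assumes the block was saved to a hypothetical results.json; the field names are exactly those printed above):

    # Hedged sketch: summarize a captured perform_tests JSON result.
    jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' \
        results.json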
00:23:04.507 8271.00 IOPS, 32.31 MiB/s [2024-12-05T19:41:24.456Z] 7234.00 IOPS, 28.26 MiB/s [2024-12-05T19:41:25.398Z] 6459.33 IOPS, 25.23 MiB/s [2024-12-05T19:41:25.398Z] 5959.25 IOPS, 23.28 MiB/s
00:23:06.392 Latency(us)
00:23:06.392 [2024-12-05T19:41:25.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:06.392 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:23:06.392 ftl0 : 4.04 5936.46 23.19 0.00 0.00 21469.00 267.82 146800.64
00:23:06.392 [2024-12-05T19:41:25.398Z] ===================================================================================================================
00:23:06.392 [2024-12-05T19:41:25.398Z] Total : 5936.46 23.19 0.00 0.00 21469.00 0.00 146800.64
00:23:06.392 {
00:23:06.392 "results": [
00:23:06.392 {
00:23:06.392 "job": "ftl0",
00:23:06.392 "core_mask": "0x1",
00:23:06.392 "workload": "randwrite",
00:23:06.392 "status": "finished",
00:23:06.392 "queue_depth": 128,
00:23:06.392 "io_size": 4096,
00:23:06.392 "runtime": 4.036919,
00:23:06.392 "iops": 5936.457976986905,
00:23:06.392 "mibps": 23.189288972605098,
00:23:06.392 "io_failed": 0,
00:23:06.392 "io_timeout": 0,
00:23:06.392 "avg_latency_us": 21469.00223749378,
00:23:06.392 "min_latency_us": 267.81538461538463,
00:23:06.392 "max_latency_us": 146800.64
00:23:06.392 }
00:23:06.392 ],
00:23:06.392 "core_count": 1
00:23:06.392 }
00:23:06.392 [2024-12-05 19:41:25.356262] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
19:41:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
00:23:06.654 [2024-12-05 19:41:25.477559] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
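By this point all three bdevperf passes have been issued: queue-depth-1 randwrite at 68 KiB, queue-depth-128 randwrite at 4 KiB, and the queue-depth-128 verify pass now running. Collected into one sketch (option values copied from the commands in this log; the bdevperf app and target must already be up):

    # Hedged sketch: the three perform_tests passes driven against ftl0.
    perf=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
    "$perf" perform_tests -q 1   -w randwrite -t 4 -o 69632   # QD 1, 68 KiB writes
    "$perf" perform_tests -q 128 -w randwrite -t 4 -o 4096    # QD 128, 4 KiB writes
    "$perf" perform_tests -q 128 -w verify    -t 4 -o 4096    # QD 128, read-back verify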
00:23:08.542 4009.00 IOPS, 15.66 MiB/s [2024-12-05T19:41:28.510Z] 3705.50 IOPS, 14.47 MiB/s [2024-12-05T19:41:29.895Z] 3824.00 IOPS, 14.94 MiB/s [2024-12-05T19:41:29.895Z] 3886.25 IOPS, 15.18 MiB/s
00:23:10.889 Latency(us)
00:23:10.889 [2024-12-05T19:41:29.895Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:10.889 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:10.889 Verification LBA range: start 0x0 length 0x1400000
00:23:10.889 ftl0 : 4.02 3899.29 15.23 0.00 0.00 32717.12 356.04 134701.69
00:23:10.889 [2024-12-05T19:41:29.895Z] ===================================================================================================================
00:23:10.889 [2024-12-05T19:41:29.895Z] Total : 3899.29 15.23 0.00 0.00 32717.12 0.00 134701.69
00:23:10.889 [2024-12-05 19:41:29.513746] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:23:10.889 {
00:23:10.889 "results": [
00:23:10.889 {
00:23:10.889 "job": "ftl0",
00:23:10.889 "core_mask": "0x1",
00:23:10.889 "workload": "verify",
00:23:10.889 "status": "finished",
00:23:10.889 "verify_range": {
00:23:10.889 "start": 0,
00:23:10.889 "length": 20971520
00:23:10.889 },
00:23:10.889 "queue_depth": 128,
00:23:10.889 "io_size": 4096,
00:23:10.889 "runtime": 4.018168,
00:23:10.889 "iops": 3899.2894274206556,
00:23:10.889 "mibps": 15.231599325861936,
00:23:10.889 "io_failed": 0,
00:23:10.889 "io_timeout": 0,
00:23:10.889 "avg_latency_us": 32717.12074782506,
00:23:10.889 "min_latency_us": 356.0369230769231,
00:23:10.889 "max_latency_us": 134701.68615384615
00:23:10.889 }
00:23:10.889 ],
00:23:10.889 "core_count": 1
00:23:10.889 }
00:23:10.889 19:41:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:23:10.889 [2024-12-05 19:41:29.732761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:10.889 [2024-12-05 19:41:29.732826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:23:10.889 [2024-12-05 19:41:29.732843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:23:10.889 [2024-12-05 19:41:29.732854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:10.889 [2024-12-05 19:41:29.732878] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:23:10.890 [2024-12-05 19:41:29.735907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:10.890 [2024-12-05 19:41:29.735946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:23:10.890 [2024-12-05 19:41:29.735962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.006 ms
00:23:10.890 [2024-12-05 19:41:29.735972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:10.890 [2024-12-05 19:41:29.738277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:10.890 [2024-12-05 19:41:29.738326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:23:10.890 [2024-12-05 19:41:29.738342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.272 ms
00:23:10.890 [2024-12-05 19:41:29.738351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:11.152 [2024-12-05 19:41:29.974389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:11.152 [2024-12-05 19:41:29.974493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name:
Persist L2P 00:23:11.152 [2024-12-05 19:41:29.974519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 236.000 ms 00:23:11.152 [2024-12-05 19:41:29.974529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.152 [2024-12-05 19:41:29.980774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.152 [2024-12-05 19:41:29.980820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:11.152 [2024-12-05 19:41:29.980836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.191 ms 00:23:11.152 [2024-12-05 19:41:29.980848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.152 [2024-12-05 19:41:30.007934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.152 [2024-12-05 19:41:30.008201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:11.152 [2024-12-05 19:41:30.008229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.984 ms 00:23:11.152 [2024-12-05 19:41:30.008238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.152 [2024-12-05 19:41:30.026837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.152 [2024-12-05 19:41:30.026919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:11.152 [2024-12-05 19:41:30.026939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.149 ms 00:23:11.152 [2024-12-05 19:41:30.026950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.152 [2024-12-05 19:41:30.027153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.152 [2024-12-05 19:41:30.027167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:11.152 [2024-12-05 19:41:30.027182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:23:11.152 [2024-12-05 19:41:30.027190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.152 [2024-12-05 19:41:30.053440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.152 [2024-12-05 19:41:30.053501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:11.152 [2024-12-05 19:41:30.053520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.229 ms 00:23:11.152 [2024-12-05 19:41:30.053529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.152 [2024-12-05 19:41:30.079272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.152 [2024-12-05 19:41:30.079346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:11.152 [2024-12-05 19:41:30.079363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.689 ms 00:23:11.152 [2024-12-05 19:41:30.079372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.152 [2024-12-05 19:41:30.104340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.152 [2024-12-05 19:41:30.104397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:11.152 [2024-12-05 19:41:30.104414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.914 ms 00:23:11.152 [2024-12-05 19:41:30.104422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.152 [2024-12-05 19:41:30.129204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.152 [2024-12-05 
19:41:30.129460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:11.152 [2024-12-05 19:41:30.129494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.668 ms 00:23:11.153 [2024-12-05 19:41:30.129502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.153 [2024-12-05 19:41:30.129637] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:11.153 [2024-12-05 19:41:30.129671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.129992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130426] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130656] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:11.153 [2024-12-05 19:41:30.130702] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:11.153 [2024-12-05 19:41:30.130713] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c9ed67bd-e6c0-48b3-87ed-7e0f8782c630 00:23:11.153 [2024-12-05 19:41:30.130725] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:11.153 [2024-12-05 19:41:30.130735] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:11.153 [2024-12-05 19:41:30.130743] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:11.153 [2024-12-05 19:41:30.130754] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:11.153 [2024-12-05 19:41:30.130761] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:11.153 [2024-12-05 19:41:30.130772] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:11.153 [2024-12-05 19:41:30.130779] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:11.153 [2024-12-05 19:41:30.130791] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:11.153 [2024-12-05 19:41:30.130798] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:11.153 [2024-12-05 19:41:30.130809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.153 [2024-12-05 19:41:30.130817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:11.153 [2024-12-05 19:41:30.130828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.183 ms 00:23:11.153 [2024-12-05 19:41:30.130852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.153 [2024-12-05 19:41:30.145068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.153 [2024-12-05 19:41:30.145261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:11.153 [2024-12-05 19:41:30.145332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.163 ms 00:23:11.153 [2024-12-05 19:41:30.145358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.153 [2024-12-05 19:41:30.145795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.153 [2024-12-05 19:41:30.145845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:11.153 [2024-12-05 19:41:30.145928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:23:11.153 [2024-12-05 19:41:30.145954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.414 [2024-12-05 19:41:30.184634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.414 [2024-12-05 19:41:30.184828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:11.414 [2024-12-05 19:41:30.184913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.414 [2024-12-05 19:41:30.184937] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:11.414 [2024-12-05 19:41:30.185033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.414 [2024-12-05 19:41:30.185056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:11.414 [2024-12-05 19:41:30.185079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.414 [2024-12-05 19:41:30.185098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.414 [2024-12-05 19:41:30.185246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.414 [2024-12-05 19:41:30.185362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:11.414 [2024-12-05 19:41:30.185399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.414 [2024-12-05 19:41:30.185419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.414 [2024-12-05 19:41:30.185453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.414 [2024-12-05 19:41:30.185474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:11.414 [2024-12-05 19:41:30.185495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.414 [2024-12-05 19:41:30.185515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.414 [2024-12-05 19:41:30.270474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.414 [2024-12-05 19:41:30.270718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:11.414 [2024-12-05 19:41:30.270787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.414 [2024-12-05 19:41:30.270811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.414 [2024-12-05 19:41:30.339558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.414 [2024-12-05 19:41:30.339806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:11.414 [2024-12-05 19:41:30.339873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.414 [2024-12-05 19:41:30.339897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.414 [2024-12-05 19:41:30.340014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.414 [2024-12-05 19:41:30.340041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:11.414 [2024-12-05 19:41:30.340065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.414 [2024-12-05 19:41:30.340085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.414 [2024-12-05 19:41:30.340201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.414 [2024-12-05 19:41:30.340229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:11.414 [2024-12-05 19:41:30.340254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.414 [2024-12-05 19:41:30.340348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.414 [2024-12-05 19:41:30.340489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.414 [2024-12-05 19:41:30.340519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:11.414 [2024-12-05 19:41:30.340546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:23:11.414 [2024-12-05 19:41:30.340565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.414 [2024-12-05 19:41:30.340615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.414 [2024-12-05 19:41:30.340638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:11.414 [2024-12-05 19:41:30.340661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.414 [2024-12-05 19:41:30.340681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.414 [2024-12-05 19:41:30.340737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.414 [2024-12-05 19:41:30.340820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:11.414 [2024-12-05 19:41:30.340847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.414 [2024-12-05 19:41:30.340875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.414 [2024-12-05 19:41:30.340946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.414 [2024-12-05 19:41:30.341087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:11.414 [2024-12-05 19:41:30.341110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.414 [2024-12-05 19:41:30.341146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.414 [2024-12-05 19:41:30.341367] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 608.556 ms, result 0 00:23:11.414 true 00:23:11.414 19:41:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 76161 00:23:11.414 19:41:30 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 76161 ']' 00:23:11.414 19:41:30 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 76161 00:23:11.414 19:41:30 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:23:11.414 19:41:30 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:11.414 19:41:30 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76161 00:23:11.414 killing process with pid 76161 00:23:11.414 Received shutdown signal, test time was about 4.000000 seconds 00:23:11.414 00:23:11.414 Latency(us) 00:23:11.414 [2024-12-05T19:41:30.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.414 [2024-12-05T19:41:30.420Z] =================================================================================================================== 00:23:11.414 [2024-12-05T19:41:30.420Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:11.414 19:41:30 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:11.414 19:41:30 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:11.414 19:41:30 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76161' 00:23:11.414 19:41:30 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 76161 00:23:11.414 19:41:30 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 76161 00:23:12.400 Remove shared memory files 00:23:12.400 19:41:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:12.400 19:41:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:23:12.400 19:41:31 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:12.400 19:41:31 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:23:12.400 19:41:31 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:23:12.400 19:41:31 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:23:12.400 19:41:31 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:12.400 19:41:31 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:23:12.400 ************************************ 00:23:12.400 END TEST ftl_bdevperf 00:23:12.400 ************************************ 00:23:12.400 00:23:12.400 real 0m20.916s 00:23:12.400 user 0m23.567s 00:23:12.400 sys 0m0.886s 00:23:12.400 19:41:31 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:12.400 19:41:31 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:12.400 19:41:31 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:23:12.400 19:41:31 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:12.400 19:41:31 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:12.400 19:41:31 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:12.400 ************************************ 00:23:12.400 START TEST ftl_trim 00:23:12.400 ************************************ 00:23:12.400 19:41:31 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:23:12.400 * Looking for test storage... 00:23:12.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:12.400 19:41:31 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:12.400 19:41:31 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:23:12.400 19:41:31 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:12.662 19:41:31 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:12.662 19:41:31 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:23:12.662 19:41:31 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:12.662 19:41:31 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:12.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.662 --rc genhtml_branch_coverage=1 00:23:12.662 --rc genhtml_function_coverage=1 00:23:12.662 --rc genhtml_legend=1 00:23:12.662 --rc geninfo_all_blocks=1 00:23:12.662 --rc geninfo_unexecuted_blocks=1 00:23:12.662 00:23:12.662 ' 00:23:12.662 19:41:31 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:12.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.662 --rc genhtml_branch_coverage=1 00:23:12.662 --rc genhtml_function_coverage=1 00:23:12.662 --rc genhtml_legend=1 00:23:12.662 --rc geninfo_all_blocks=1 00:23:12.662 --rc geninfo_unexecuted_blocks=1 00:23:12.662 00:23:12.662 ' 00:23:12.662 19:41:31 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:12.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.662 --rc genhtml_branch_coverage=1 00:23:12.662 --rc genhtml_function_coverage=1 00:23:12.662 --rc genhtml_legend=1 00:23:12.662 --rc geninfo_all_blocks=1 00:23:12.662 --rc geninfo_unexecuted_blocks=1 00:23:12.662 00:23:12.662 ' 00:23:12.662 19:41:31 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:12.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:12.662 --rc genhtml_branch_coverage=1 00:23:12.662 --rc genhtml_function_coverage=1 00:23:12.662 --rc genhtml_legend=1 00:23:12.662 --rc geninfo_all_blocks=1 00:23:12.662 --rc geninfo_unexecuted_blocks=1 00:23:12.662 00:23:12.662 ' 00:23:12.662 19:41:31 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:12.662 19:41:31 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:23:12.662 19:41:31 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:12.662 19:41:31 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:12.662 19:41:31 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
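The xtrace above is scripts/common.sh probing the installed lcov: "lt 1.15 2" delegates to "cmp_versions 1.15 '<' 2", which splits both version strings on '.', '-' and ':' and compares them component by component; returning 0 here (lcov 1.15 is older than 2) selects the pre-2.x "--rc lcov_branch_coverage=1" option spelling assigned to lcov_rc_opt above. What follows is a minimal sketch of that comparison reconstructed from the traced lines, not a verbatim copy of scripts/common.sh: only the '<' case is handled, and the early returns and zero-padding of missing components are simplifying assumptions (the real helper tracks lt/gt/eq flags for all operators).

    decimal() {
        local d=$1
        [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0    # non-numeric components compare as 0 (assumption)
    }

    cmp_versions() {    # sketch: implements only the '<' operator exercised above
        local ver1 ver2 ver1_l ver2_l v
        IFS=.-: read -ra ver1 <<< "$1"                 # split on '.', '-' and ':', as in the trace
        IFS=.-: read -ra ver2 <<< "$3"
        ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
        for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
            ver1[v]=$(decimal "${ver1[v]:-0}")         # pad missing components with 0
            ver2[v]=$(decimal "${ver2[v]:-0}")
            ((ver1[v] > ver2[v])) && return 1          # a greater component means '<' is false
            ((ver1[v] < ver2[v])) && return 0          # 1 < 2 takes this branch in the trace above
        done
        return 1                                       # equal versions are not strictly '<'
    }

    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo "lcov predates 2.x"              # returns 0 for lcov 1.15, as logged above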
00:23:12.662 19:41:31 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:12.662 19:41:31 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:12.662 19:41:31 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:12.662 19:41:31 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:12.663 19:41:31 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=76497 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:23:12.663 19:41:31 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 76497 00:23:12.663 19:41:31 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76497 ']' 00:23:12.663 19:41:31 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.663 19:41:31 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:12.663 19:41:31 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.663 19:41:31 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:12.663 19:41:31 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:12.663 [2024-12-05 19:41:31.601712] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:23:12.663 [2024-12-05 19:41:31.602608] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76497 ] 00:23:12.923 [2024-12-05 19:41:31.770063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:12.923 [2024-12-05 19:41:31.911637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.923 [2024-12-05 19:41:31.912058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.923 [2024-12-05 19:41:31.912242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.891 19:41:32 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:13.891 19:41:32 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:23:13.891 19:41:32 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:13.891 19:41:32 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:23:13.891 19:41:32 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:13.891 19:41:32 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:23:13.891 19:41:32 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:23:13.891 19:41:32 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:14.151 19:41:33 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:14.151 19:41:33 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:23:14.151 19:41:33 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:14.151 19:41:33 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:14.152 19:41:33 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:14.152 19:41:33 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:14.152 19:41:33 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:14.152 19:41:33 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:14.413 19:41:33 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:14.413 { 00:23:14.413 "name": "nvme0n1", 00:23:14.413 "aliases": [ 
00:23:14.413 "2a865a72-d770-4264-b73c-fc469301f52c" 00:23:14.413 ], 00:23:14.413 "product_name": "NVMe disk", 00:23:14.413 "block_size": 4096, 00:23:14.413 "num_blocks": 1310720, 00:23:14.413 "uuid": "2a865a72-d770-4264-b73c-fc469301f52c", 00:23:14.413 "numa_id": -1, 00:23:14.413 "assigned_rate_limits": { 00:23:14.413 "rw_ios_per_sec": 0, 00:23:14.413 "rw_mbytes_per_sec": 0, 00:23:14.413 "r_mbytes_per_sec": 0, 00:23:14.413 "w_mbytes_per_sec": 0 00:23:14.413 }, 00:23:14.413 "claimed": true, 00:23:14.413 "claim_type": "read_many_write_one", 00:23:14.413 "zoned": false, 00:23:14.413 "supported_io_types": { 00:23:14.413 "read": true, 00:23:14.413 "write": true, 00:23:14.413 "unmap": true, 00:23:14.413 "flush": true, 00:23:14.413 "reset": true, 00:23:14.413 "nvme_admin": true, 00:23:14.413 "nvme_io": true, 00:23:14.413 "nvme_io_md": false, 00:23:14.413 "write_zeroes": true, 00:23:14.413 "zcopy": false, 00:23:14.413 "get_zone_info": false, 00:23:14.413 "zone_management": false, 00:23:14.413 "zone_append": false, 00:23:14.413 "compare": true, 00:23:14.413 "compare_and_write": false, 00:23:14.413 "abort": true, 00:23:14.413 "seek_hole": false, 00:23:14.413 "seek_data": false, 00:23:14.413 "copy": true, 00:23:14.413 "nvme_iov_md": false 00:23:14.413 }, 00:23:14.413 "driver_specific": { 00:23:14.413 "nvme": [ 00:23:14.413 { 00:23:14.413 "pci_address": "0000:00:11.0", 00:23:14.413 "trid": { 00:23:14.413 "trtype": "PCIe", 00:23:14.413 "traddr": "0000:00:11.0" 00:23:14.413 }, 00:23:14.413 "ctrlr_data": { 00:23:14.413 "cntlid": 0, 00:23:14.413 "vendor_id": "0x1b36", 00:23:14.413 "model_number": "QEMU NVMe Ctrl", 00:23:14.413 "serial_number": "12341", 00:23:14.413 "firmware_revision": "8.0.0", 00:23:14.413 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:14.413 "oacs": { 00:23:14.413 "security": 0, 00:23:14.413 "format": 1, 00:23:14.413 "firmware": 0, 00:23:14.413 "ns_manage": 1 00:23:14.413 }, 00:23:14.413 "multi_ctrlr": false, 00:23:14.413 "ana_reporting": false 00:23:14.413 }, 00:23:14.413 "vs": { 00:23:14.413 "nvme_version": "1.4" 00:23:14.413 }, 00:23:14.413 "ns_data": { 00:23:14.413 "id": 1, 00:23:14.413 "can_share": false 00:23:14.413 } 00:23:14.413 } 00:23:14.413 ], 00:23:14.413 "mp_policy": "active_passive" 00:23:14.413 } 00:23:14.413 } 00:23:14.413 ]' 00:23:14.413 19:41:33 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:14.413 19:41:33 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:23:14.413 19:41:33 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:14.413 19:41:33 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:14.413 19:41:33 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:14.413 19:41:33 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:23:14.413 19:41:33 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:23:14.413 19:41:33 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:14.413 19:41:33 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:23:14.413 19:41:33 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:14.413 19:41:33 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:14.676 19:41:33 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=e291c6bc-328c-4d8b-abd4-e15bb702b2b9 00:23:14.676 19:41:33 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:23:14.676 19:41:33 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u e291c6bc-328c-4d8b-abd4-e15bb702b2b9 00:23:14.938 19:41:33 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:15.200 19:41:34 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=d58e7cc7-f2e9-4033-a2e4-b0429903fb2c 00:23:15.200 19:41:34 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d58e7cc7-f2e9-4033-a2e4-b0429903fb2c 00:23:15.461 19:41:34 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=5daf087a-faab-4617-9aa9-ebef7bf92b63 00:23:15.461 19:41:34 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 5daf087a-faab-4617-9aa9-ebef7bf92b63 00:23:15.461 19:41:34 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:23:15.461 19:41:34 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:15.461 19:41:34 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=5daf087a-faab-4617-9aa9-ebef7bf92b63 00:23:15.461 19:41:34 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:23:15.461 19:41:34 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 5daf087a-faab-4617-9aa9-ebef7bf92b63 00:23:15.461 19:41:34 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=5daf087a-faab-4617-9aa9-ebef7bf92b63 00:23:15.461 19:41:34 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:15.461 19:41:34 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:15.461 19:41:34 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:15.461 19:41:34 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5daf087a-faab-4617-9aa9-ebef7bf92b63 00:23:15.721 19:41:34 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:15.721 { 00:23:15.721 "name": "5daf087a-faab-4617-9aa9-ebef7bf92b63", 00:23:15.721 "aliases": [ 00:23:15.721 "lvs/nvme0n1p0" 00:23:15.721 ], 00:23:15.721 "product_name": "Logical Volume", 00:23:15.721 "block_size": 4096, 00:23:15.721 "num_blocks": 26476544, 00:23:15.721 "uuid": "5daf087a-faab-4617-9aa9-ebef7bf92b63", 00:23:15.721 "assigned_rate_limits": { 00:23:15.721 "rw_ios_per_sec": 0, 00:23:15.721 "rw_mbytes_per_sec": 0, 00:23:15.721 "r_mbytes_per_sec": 0, 00:23:15.721 "w_mbytes_per_sec": 0 00:23:15.721 }, 00:23:15.721 "claimed": false, 00:23:15.721 "zoned": false, 00:23:15.721 "supported_io_types": { 00:23:15.721 "read": true, 00:23:15.721 "write": true, 00:23:15.721 "unmap": true, 00:23:15.721 "flush": false, 00:23:15.722 "reset": true, 00:23:15.722 "nvme_admin": false, 00:23:15.722 "nvme_io": false, 00:23:15.722 "nvme_io_md": false, 00:23:15.722 "write_zeroes": true, 00:23:15.722 "zcopy": false, 00:23:15.722 "get_zone_info": false, 00:23:15.722 "zone_management": false, 00:23:15.722 "zone_append": false, 00:23:15.722 "compare": false, 00:23:15.722 "compare_and_write": false, 00:23:15.722 "abort": false, 00:23:15.722 "seek_hole": true, 00:23:15.722 "seek_data": true, 00:23:15.722 "copy": false, 00:23:15.722 "nvme_iov_md": false 00:23:15.722 }, 00:23:15.722 "driver_specific": { 00:23:15.722 "lvol": { 00:23:15.722 "lvol_store_uuid": "d58e7cc7-f2e9-4033-a2e4-b0429903fb2c", 00:23:15.722 "base_bdev": "nvme0n1", 00:23:15.722 "thin_provision": true, 00:23:15.722 "num_allocated_clusters": 0, 00:23:15.722 "snapshot": false, 00:23:15.722 "clone": false, 00:23:15.722 "esnap_clone": false 00:23:15.722 } 00:23:15.722 } 00:23:15.722 } 00:23:15.722 ]' 00:23:15.722 19:41:34 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:15.722 19:41:34 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:23:15.722 19:41:34 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:15.722 19:41:34 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:15.722 19:41:34 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:15.722 19:41:34 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:23:15.722 19:41:34 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:23:15.722 19:41:34 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:23:15.722 19:41:34 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:15.983 19:41:34 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:15.983 19:41:34 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:15.983 19:41:34 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 5daf087a-faab-4617-9aa9-ebef7bf92b63 00:23:15.983 19:41:34 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=5daf087a-faab-4617-9aa9-ebef7bf92b63 00:23:15.983 19:41:34 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:15.983 19:41:34 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:15.983 19:41:34 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:15.983 19:41:34 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5daf087a-faab-4617-9aa9-ebef7bf92b63 00:23:16.256 19:41:35 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:16.256 { 00:23:16.256 "name": "5daf087a-faab-4617-9aa9-ebef7bf92b63", 00:23:16.256 "aliases": [ 00:23:16.256 "lvs/nvme0n1p0" 00:23:16.256 ], 00:23:16.256 "product_name": "Logical Volume", 00:23:16.256 "block_size": 4096, 00:23:16.256 "num_blocks": 26476544, 00:23:16.256 "uuid": "5daf087a-faab-4617-9aa9-ebef7bf92b63", 00:23:16.256 "assigned_rate_limits": { 00:23:16.256 "rw_ios_per_sec": 0, 00:23:16.256 "rw_mbytes_per_sec": 0, 00:23:16.256 "r_mbytes_per_sec": 0, 00:23:16.256 "w_mbytes_per_sec": 0 00:23:16.256 }, 00:23:16.256 "claimed": false, 00:23:16.256 "zoned": false, 00:23:16.256 "supported_io_types": { 00:23:16.256 "read": true, 00:23:16.256 "write": true, 00:23:16.256 "unmap": true, 00:23:16.256 "flush": false, 00:23:16.256 "reset": true, 00:23:16.256 "nvme_admin": false, 00:23:16.256 "nvme_io": false, 00:23:16.256 "nvme_io_md": false, 00:23:16.256 "write_zeroes": true, 00:23:16.256 "zcopy": false, 00:23:16.256 "get_zone_info": false, 00:23:16.256 "zone_management": false, 00:23:16.256 "zone_append": false, 00:23:16.257 "compare": false, 00:23:16.257 "compare_and_write": false, 00:23:16.257 "abort": false, 00:23:16.257 "seek_hole": true, 00:23:16.257 "seek_data": true, 00:23:16.257 "copy": false, 00:23:16.257 "nvme_iov_md": false 00:23:16.257 }, 00:23:16.257 "driver_specific": { 00:23:16.257 "lvol": { 00:23:16.257 "lvol_store_uuid": "d58e7cc7-f2e9-4033-a2e4-b0429903fb2c", 00:23:16.257 "base_bdev": "nvme0n1", 00:23:16.257 "thin_provision": true, 00:23:16.257 "num_allocated_clusters": 0, 00:23:16.257 "snapshot": false, 00:23:16.257 "clone": false, 00:23:16.257 "esnap_clone": false 00:23:16.257 } 00:23:16.257 } 00:23:16.257 } 00:23:16.257 ]' 00:23:16.257 19:41:35 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:16.257 19:41:35 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:23:16.257 19:41:35 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:16.257 19:41:35 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:16.257 19:41:35 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:16.257 19:41:35 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:23:16.257 19:41:35 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:23:16.257 19:41:35 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:16.517 19:41:35 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:23:16.517 19:41:35 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:23:16.517 19:41:35 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 5daf087a-faab-4617-9aa9-ebef7bf92b63 00:23:16.517 19:41:35 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=5daf087a-faab-4617-9aa9-ebef7bf92b63 00:23:16.517 19:41:35 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:16.517 19:41:35 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:23:16.517 19:41:35 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:23:16.517 19:41:35 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5daf087a-faab-4617-9aa9-ebef7bf92b63 00:23:16.778 19:41:35 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:16.778 { 00:23:16.778 "name": "5daf087a-faab-4617-9aa9-ebef7bf92b63", 00:23:16.778 "aliases": [ 00:23:16.778 "lvs/nvme0n1p0" 00:23:16.778 ], 00:23:16.778 "product_name": "Logical Volume", 00:23:16.778 "block_size": 4096, 00:23:16.778 "num_blocks": 26476544, 00:23:16.778 "uuid": "5daf087a-faab-4617-9aa9-ebef7bf92b63", 00:23:16.778 "assigned_rate_limits": { 00:23:16.778 "rw_ios_per_sec": 0, 00:23:16.778 "rw_mbytes_per_sec": 0, 00:23:16.778 "r_mbytes_per_sec": 0, 00:23:16.778 "w_mbytes_per_sec": 0 00:23:16.778 }, 00:23:16.778 "claimed": false, 00:23:16.778 "zoned": false, 00:23:16.778 "supported_io_types": { 00:23:16.778 "read": true, 00:23:16.778 "write": true, 00:23:16.778 "unmap": true, 00:23:16.778 "flush": false, 00:23:16.778 "reset": true, 00:23:16.778 "nvme_admin": false, 00:23:16.778 "nvme_io": false, 00:23:16.778 "nvme_io_md": false, 00:23:16.778 "write_zeroes": true, 00:23:16.778 "zcopy": false, 00:23:16.778 "get_zone_info": false, 00:23:16.778 "zone_management": false, 00:23:16.778 "zone_append": false, 00:23:16.778 "compare": false, 00:23:16.778 "compare_and_write": false, 00:23:16.778 "abort": false, 00:23:16.778 "seek_hole": true, 00:23:16.778 "seek_data": true, 00:23:16.778 "copy": false, 00:23:16.778 "nvme_iov_md": false 00:23:16.778 }, 00:23:16.778 "driver_specific": { 00:23:16.778 "lvol": { 00:23:16.778 "lvol_store_uuid": "d58e7cc7-f2e9-4033-a2e4-b0429903fb2c", 00:23:16.778 "base_bdev": "nvme0n1", 00:23:16.778 "thin_provision": true, 00:23:16.778 "num_allocated_clusters": 0, 00:23:16.778 "snapshot": false, 00:23:16.778 "clone": false, 00:23:16.778 "esnap_clone": false 00:23:16.778 } 00:23:16.778 } 00:23:16.778 } 00:23:16.778 ]' 00:23:16.778 19:41:35 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:16.778 19:41:35 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:23:16.778 19:41:35 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:16.778 19:41:35 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:23:16.778 19:41:35 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:16.778 19:41:35 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:23:16.778 19:41:35 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:23:16.778 19:41:35 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 5daf087a-faab-4617-9aa9-ebef7bf92b63 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:23:17.041 [2024-12-05 19:41:35.902580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.041 [2024-12-05 19:41:35.902676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:17.041 [2024-12-05 19:41:35.902698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:17.041 [2024-12-05 19:41:35.902708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.041 [2024-12-05 19:41:35.906245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.041 [2024-12-05 19:41:35.906308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:17.041 [2024-12-05 19:41:35.906323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.499 ms 00:23:17.041 [2024-12-05 19:41:35.906331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.041 [2024-12-05 19:41:35.906512] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:17.041 [2024-12-05 19:41:35.907289] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:17.041 [2024-12-05 19:41:35.907481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.041 [2024-12-05 19:41:35.907496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:17.041 [2024-12-05 19:41:35.907508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:23:17.041 [2024-12-05 19:41:35.907516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.041 [2024-12-05 19:41:35.907646] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5b576645-d761-45ff-acc8-8625a1d5c445 00:23:17.041 [2024-12-05 19:41:35.909602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.041 [2024-12-05 19:41:35.909659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:17.041 [2024-12-05 19:41:35.909672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:17.041 [2024-12-05 19:41:35.909684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.041 [2024-12-05 19:41:35.920387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.041 [2024-12-05 19:41:35.920458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:17.041 [2024-12-05 19:41:35.920475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.602 ms 00:23:17.041 [2024-12-05 19:41:35.920486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.041 [2024-12-05 19:41:35.920703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.041 [2024-12-05 19:41:35.920720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:17.041 [2024-12-05 19:41:35.920729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.107 ms 00:23:17.041 [2024-12-05 19:41:35.920743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.041 [2024-12-05 19:41:35.920790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.041 [2024-12-05 19:41:35.920802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:17.041 [2024-12-05 19:41:35.920811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:17.041 [2024-12-05 19:41:35.920823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.041 [2024-12-05 19:41:35.920874] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:17.041 [2024-12-05 19:41:35.925723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.041 [2024-12-05 19:41:35.925780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:17.041 [2024-12-05 19:41:35.925797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.854 ms 00:23:17.041 [2024-12-05 19:41:35.925806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.041 [2024-12-05 19:41:35.925911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.041 [2024-12-05 19:41:35.925944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:17.041 [2024-12-05 19:41:35.925955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:17.041 [2024-12-05 19:41:35.925964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.041 [2024-12-05 19:41:35.925999] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:17.041 [2024-12-05 19:41:35.926220] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:17.041 [2024-12-05 19:41:35.926240] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:17.041 [2024-12-05 19:41:35.926253] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:17.041 [2024-12-05 19:41:35.926266] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:17.041 [2024-12-05 19:41:35.926275] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:17.041 [2024-12-05 19:41:35.926286] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:17.041 [2024-12-05 19:41:35.926295] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:17.041 [2024-12-05 19:41:35.926307] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:17.041 [2024-12-05 19:41:35.926316] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:17.041 [2024-12-05 19:41:35.926327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.041 [2024-12-05 19:41:35.926336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:17.041 [2024-12-05 19:41:35.926347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:23:17.041 [2024-12-05 19:41:35.926354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.041 [2024-12-05 19:41:35.926461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
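The capacity lines in the startup dump above can be cross-checked; the values below are taken from the log, but the breakdown is an inference, not log output. With 4096-byte FTL blocks, the reported 23592960 L2P entries at the reported address size of 4 bytes account exactly for the 90.00 MiB l2p region in the layout dump that follows, and the same entry count fixes the user-visible capacity, which matches the num_blocks the created ftl0 bdev reports later:

    echo $((23592960 * 4 / 1024 / 1024))       # L2P table: entries x 4 B addresses -> 90 MiB ("Region l2p ... 90.00 MiB")
    echo $((23592960 * 4096 / 1024 / 1024))    # user capacity: entries x 4 KiB blocks -> 92160 MiB (90 GiB);
                                               # the gap to the 103424 MiB base bdev covers the requested
                                               # 10% overprovisioning (--overprovisioning 10) plus band metadata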
00:23:17.041 [2024-12-05 19:41:35.926470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:17.041 [2024-12-05 19:41:35.926480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:17.041 [2024-12-05 19:41:35.926488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.041 [2024-12-05 19:41:35.926614] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:17.041 [2024-12-05 19:41:35.926625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:17.041 [2024-12-05 19:41:35.926635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:17.041 [2024-12-05 19:41:35.926644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:17.041 [2024-12-05 19:41:35.926653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:17.041 [2024-12-05 19:41:35.926662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:17.041 [2024-12-05 19:41:35.926671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:17.041 [2024-12-05 19:41:35.926678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:17.042 [2024-12-05 19:41:35.926687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:17.042 [2024-12-05 19:41:35.926694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:17.042 [2024-12-05 19:41:35.926704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:17.042 [2024-12-05 19:41:35.926711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:17.042 [2024-12-05 19:41:35.926721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:17.042 [2024-12-05 19:41:35.926728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:17.042 [2024-12-05 19:41:35.926737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:17.042 [2024-12-05 19:41:35.926744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:17.042 [2024-12-05 19:41:35.926755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:17.042 [2024-12-05 19:41:35.926761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:17.042 [2024-12-05 19:41:35.926770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:17.042 [2024-12-05 19:41:35.926776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:17.042 [2024-12-05 19:41:35.926786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:17.042 [2024-12-05 19:41:35.926793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:17.042 [2024-12-05 19:41:35.926803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:17.042 [2024-12-05 19:41:35.926812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:17.042 [2024-12-05 19:41:35.926821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:17.042 [2024-12-05 19:41:35.926828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:17.042 [2024-12-05 19:41:35.926837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:17.042 [2024-12-05 19:41:35.926844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:17.042 [2024-12-05 19:41:35.926853] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l3 00:23:17.042 [2024-12-05 19:41:35.926860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:17.042 [2024-12-05 19:41:35.926869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:17.042 [2024-12-05 19:41:35.926876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:17.042 [2024-12-05 19:41:35.926887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:17.042 [2024-12-05 19:41:35.926894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:17.042 [2024-12-05 19:41:35.926903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:17.042 [2024-12-05 19:41:35.926909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:17.042 [2024-12-05 19:41:35.926918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:17.042 [2024-12-05 19:41:35.926925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:17.042 [2024-12-05 19:41:35.926935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:17.042 [2024-12-05 19:41:35.926941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:17.042 [2024-12-05 19:41:35.926950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:17.042 [2024-12-05 19:41:35.926958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:17.042 [2024-12-05 19:41:35.926967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:17.042 [2024-12-05 19:41:35.926974] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:17.042 [2024-12-05 19:41:35.926983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:17.042 [2024-12-05 19:41:35.926990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:17.042 [2024-12-05 19:41:35.927000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:17.042 [2024-12-05 19:41:35.927008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:17.042 [2024-12-05 19:41:35.927019] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:17.042 [2024-12-05 19:41:35.927026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:17.042 [2024-12-05 19:41:35.927034] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:17.042 [2024-12-05 19:41:35.927040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:17.042 [2024-12-05 19:41:35.927049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:17.042 [2024-12-05 19:41:35.927058] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:17.042 [2024-12-05 19:41:35.927069] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:17.042 [2024-12-05 19:41:35.927082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:17.042 [2024-12-05 19:41:35.927092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:17.042 [2024-12-05 19:41:35.927100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 
blk_offs:0x5aa0 blk_sz:0x80 00:23:17.042 [2024-12-05 19:41:35.927109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:17.042 [2024-12-05 19:41:35.927116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:17.042 [2024-12-05 19:41:35.927141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:17.042 [2024-12-05 19:41:35.927150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:17.042 [2024-12-05 19:41:35.927160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:17.042 [2024-12-05 19:41:35.927170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:17.042 [2024-12-05 19:41:35.927183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:17.042 [2024-12-05 19:41:35.927190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:17.042 [2024-12-05 19:41:35.927200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:17.042 [2024-12-05 19:41:35.927207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:17.042 [2024-12-05 19:41:35.927217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:17.042 [2024-12-05 19:41:35.927225] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:17.042 [2024-12-05 19:41:35.927239] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:17.042 [2024-12-05 19:41:35.927248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:17.042 [2024-12-05 19:41:35.927258] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:17.042 [2024-12-05 19:41:35.927265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:17.042 [2024-12-05 19:41:35.927275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:17.042 [2024-12-05 19:41:35.927282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:17.042 [2024-12-05 19:41:35.927293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:17.042 [2024-12-05 19:41:35.927300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.739 ms 00:23:17.042 [2024-12-05 19:41:35.927310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:17.042 [2024-12-05 19:41:35.927408] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV 
cache data region needs scrubbing, this may take a while. 00:23:17.042 [2024-12-05 19:41:35.927423] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:21.255 [2024-12-05 19:41:39.916071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.255 [2024-12-05 19:41:39.916189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:21.256 [2024-12-05 19:41:39.916209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3988.644 ms 00:23:21.256 [2024-12-05 19:41:39.916223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.256 [2024-12-05 19:41:39.951337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.256 [2024-12-05 19:41:39.951423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:21.256 [2024-12-05 19:41:39.951442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.767 ms 00:23:21.256 [2024-12-05 19:41:39.951456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.256 [2024-12-05 19:41:39.951667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.256 [2024-12-05 19:41:39.951684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:21.256 [2024-12-05 19:41:39.951720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:23:21.256 [2024-12-05 19:41:39.951735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.256 [2024-12-05 19:41:40.001384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.256 [2024-12-05 19:41:40.001470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:21.256 [2024-12-05 19:41:40.001487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.584 ms 00:23:21.256 [2024-12-05 19:41:40.001500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.256 [2024-12-05 19:41:40.001684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.256 [2024-12-05 19:41:40.001701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:21.256 [2024-12-05 19:41:40.001711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:21.256 [2024-12-05 19:41:40.001722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.256 [2024-12-05 19:41:40.002427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.256 [2024-12-05 19:41:40.002462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:21.256 [2024-12-05 19:41:40.002474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.658 ms 00:23:21.256 [2024-12-05 19:41:40.002485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.256 [2024-12-05 19:41:40.002670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.256 [2024-12-05 19:41:40.002684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:21.256 [2024-12-05 19:41:40.002717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:23:21.256 [2024-12-05 19:41:40.002732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.256 [2024-12-05 19:41:40.022525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.256 [2024-12-05 19:41:40.022595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:23:21.256 [2024-12-05 19:41:40.022609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.745 ms 00:23:21.256 [2024-12-05 19:41:40.022621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.256 [2024-12-05 19:41:40.036952] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:21.256 [2024-12-05 19:41:40.060666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.256 [2024-12-05 19:41:40.061025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:21.256 [2024-12-05 19:41:40.061057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.839 ms 00:23:21.256 [2024-12-05 19:41:40.061066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.256 [2024-12-05 19:41:40.183097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.256 [2024-12-05 19:41:40.183200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:21.256 [2024-12-05 19:41:40.183222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 121.851 ms 00:23:21.256 [2024-12-05 19:41:40.183231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.256 [2024-12-05 19:41:40.183552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.256 [2024-12-05 19:41:40.183568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:21.256 [2024-12-05 19:41:40.183737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.178 ms 00:23:21.256 [2024-12-05 19:41:40.183747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.256 [2024-12-05 19:41:40.212627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.256 [2024-12-05 19:41:40.212703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:21.256 [2024-12-05 19:41:40.212724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.824 ms 00:23:21.256 [2024-12-05 19:41:40.212733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.256 [2024-12-05 19:41:40.241436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.256 [2024-12-05 19:41:40.241502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:21.256 [2024-12-05 19:41:40.241521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.550 ms 00:23:21.256 [2024-12-05 19:41:40.241530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.256 [2024-12-05 19:41:40.242321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.256 [2024-12-05 19:41:40.242349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:21.256 [2024-12-05 19:41:40.242363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.645 ms 00:23:21.256 [2024-12-05 19:41:40.242433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.517 [2024-12-05 19:41:40.339599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.517 [2024-12-05 19:41:40.339688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:21.517 [2024-12-05 19:41:40.339712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.083 ms 00:23:21.517 [2024-12-05 19:41:40.339722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:21.517 [2024-12-05 19:41:40.371158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.517 [2024-12-05 19:41:40.371243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:21.517 [2024-12-05 19:41:40.371263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.232 ms 00:23:21.517 [2024-12-05 19:41:40.371272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.517 [2024-12-05 19:41:40.402345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.517 [2024-12-05 19:41:40.402425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:21.517 [2024-12-05 19:41:40.402444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.910 ms 00:23:21.517 [2024-12-05 19:41:40.402452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.517 [2024-12-05 19:41:40.433761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.517 [2024-12-05 19:41:40.434111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:21.517 [2024-12-05 19:41:40.434167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.127 ms 00:23:21.517 [2024-12-05 19:41:40.434177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.517 [2024-12-05 19:41:40.434387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.517 [2024-12-05 19:41:40.434406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:21.517 [2024-12-05 19:41:40.434422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:21.517 [2024-12-05 19:41:40.434430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.517 [2024-12-05 19:41:40.434553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.517 [2024-12-05 19:41:40.434564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:21.517 [2024-12-05 19:41:40.434575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:21.518 [2024-12-05 19:41:40.434583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.518 [2024-12-05 19:41:40.435911] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:21.518 [2024-12-05 19:41:40.440308] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4532.969 ms, result 0 00:23:21.518 { 00:23:21.518 "name": "ftl0", 00:23:21.518 "uuid": "5b576645-d761-45ff-acc8-8625a1d5c445" 00:23:21.518 } 00:23:21.518 [2024-12-05 19:41:40.442618] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:21.518 19:41:40 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:23:21.518 19:41:40 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:23:21.518 19:41:40 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:23:21.518 19:41:40 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:23:21.518 19:41:40 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:23:21.518 19:41:40 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:23:21.518 19:41:40 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:23:21.777 19:41:40 
ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:23:22.037 [ 00:23:22.037 { 00:23:22.037 "name": "ftl0", 00:23:22.037 "aliases": [ 00:23:22.037 "5b576645-d761-45ff-acc8-8625a1d5c445" 00:23:22.037 ], 00:23:22.037 "product_name": "FTL disk", 00:23:22.037 "block_size": 4096, 00:23:22.037 "num_blocks": 23592960, 00:23:22.037 "uuid": "5b576645-d761-45ff-acc8-8625a1d5c445", 00:23:22.037 "assigned_rate_limits": { 00:23:22.037 "rw_ios_per_sec": 0, 00:23:22.037 "rw_mbytes_per_sec": 0, 00:23:22.037 "r_mbytes_per_sec": 0, 00:23:22.037 "w_mbytes_per_sec": 0 00:23:22.037 }, 00:23:22.037 "claimed": false, 00:23:22.037 "zoned": false, 00:23:22.037 "supported_io_types": { 00:23:22.037 "read": true, 00:23:22.037 "write": true, 00:23:22.037 "unmap": true, 00:23:22.037 "flush": true, 00:23:22.037 "reset": false, 00:23:22.037 "nvme_admin": false, 00:23:22.037 "nvme_io": false, 00:23:22.037 "nvme_io_md": false, 00:23:22.037 "write_zeroes": true, 00:23:22.037 "zcopy": false, 00:23:22.037 "get_zone_info": false, 00:23:22.037 "zone_management": false, 00:23:22.037 "zone_append": false, 00:23:22.037 "compare": false, 00:23:22.037 "compare_and_write": false, 00:23:22.037 "abort": false, 00:23:22.037 "seek_hole": false, 00:23:22.037 "seek_data": false, 00:23:22.037 "copy": false, 00:23:22.037 "nvme_iov_md": false 00:23:22.037 }, 00:23:22.037 "driver_specific": { 00:23:22.037 "ftl": { 00:23:22.037 "base_bdev": "5daf087a-faab-4617-9aa9-ebef7bf92b63", 00:23:22.037 "cache": "nvc0n1p0" 00:23:22.037 } 00:23:22.037 } 00:23:22.037 } 00:23:22.037 ] 00:23:22.037 19:41:40 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:23:22.037 19:41:40 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:23:22.037 19:41:40 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:22.299 19:41:41 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:23:22.299 19:41:41 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:23:22.559 19:41:41 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:23:22.559 { 00:23:22.559 "name": "ftl0", 00:23:22.559 "aliases": [ 00:23:22.559 "5b576645-d761-45ff-acc8-8625a1d5c445" 00:23:22.559 ], 00:23:22.559 "product_name": "FTL disk", 00:23:22.559 "block_size": 4096, 00:23:22.559 "num_blocks": 23592960, 00:23:22.559 "uuid": "5b576645-d761-45ff-acc8-8625a1d5c445", 00:23:22.559 "assigned_rate_limits": { 00:23:22.559 "rw_ios_per_sec": 0, 00:23:22.559 "rw_mbytes_per_sec": 0, 00:23:22.559 "r_mbytes_per_sec": 0, 00:23:22.559 "w_mbytes_per_sec": 0 00:23:22.559 }, 00:23:22.559 "claimed": false, 00:23:22.559 "zoned": false, 00:23:22.559 "supported_io_types": { 00:23:22.559 "read": true, 00:23:22.559 "write": true, 00:23:22.559 "unmap": true, 00:23:22.559 "flush": true, 00:23:22.559 "reset": false, 00:23:22.559 "nvme_admin": false, 00:23:22.559 "nvme_io": false, 00:23:22.559 "nvme_io_md": false, 00:23:22.559 "write_zeroes": true, 00:23:22.559 "zcopy": false, 00:23:22.559 "get_zone_info": false, 00:23:22.559 "zone_management": false, 00:23:22.559 "zone_append": false, 00:23:22.559 "compare": false, 00:23:22.559 "compare_and_write": false, 00:23:22.559 "abort": false, 00:23:22.559 "seek_hole": false, 00:23:22.559 "seek_data": false, 00:23:22.559 "copy": false, 00:23:22.559 "nvme_iov_md": false 00:23:22.559 }, 00:23:22.559 "driver_specific": { 00:23:22.559 "ftl": { 00:23:22.559 "base_bdev": 
"5daf087a-faab-4617-9aa9-ebef7bf92b63", 00:23:22.559 "cache": "nvc0n1p0" 00:23:22.559 } 00:23:22.559 } 00:23:22.559 } 00:23:22.559 ]' 00:23:22.559 19:41:41 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:23:22.559 19:41:41 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:23:22.559 19:41:41 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:22.821 [2024-12-05 19:41:41.745017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.821 [2024-12-05 19:41:41.745102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:22.821 [2024-12-05 19:41:41.745122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:22.821 [2024-12-05 19:41:41.745162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.821 [2024-12-05 19:41:41.745209] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:22.821 [2024-12-05 19:41:41.748218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.821 [2024-12-05 19:41:41.748274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:22.821 [2024-12-05 19:41:41.748294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.983 ms 00:23:22.821 [2024-12-05 19:41:41.748304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.821 [2024-12-05 19:41:41.748966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.821 [2024-12-05 19:41:41.748989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:22.821 [2024-12-05 19:41:41.749003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.611 ms 00:23:22.821 [2024-12-05 19:41:41.749012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.821 [2024-12-05 19:41:41.752714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.821 [2024-12-05 19:41:41.752743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:22.821 [2024-12-05 19:41:41.752756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.672 ms 00:23:22.821 [2024-12-05 19:41:41.752765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.821 [2024-12-05 19:41:41.760019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.821 [2024-12-05 19:41:41.760274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:22.821 [2024-12-05 19:41:41.760305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.192 ms 00:23:22.821 [2024-12-05 19:41:41.760315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.821 [2024-12-05 19:41:41.790140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.821 [2024-12-05 19:41:41.790224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:22.821 [2024-12-05 19:41:41.790247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.667 ms 00:23:22.821 [2024-12-05 19:41:41.790256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.821 [2024-12-05 19:41:41.810333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.821 [2024-12-05 19:41:41.810422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:22.821 [2024-12-05 19:41:41.810442] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.932 ms 00:23:22.821 [2024-12-05 19:41:41.810456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.821 [2024-12-05 19:41:41.810768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.821 [2024-12-05 19:41:41.810782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:22.821 [2024-12-05 19:41:41.810794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:23:22.821 [2024-12-05 19:41:41.810803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.084 [2024-12-05 19:41:41.839444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.084 [2024-12-05 19:41:41.839729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:23.084 [2024-12-05 19:41:41.839761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.597 ms 00:23:23.084 [2024-12-05 19:41:41.839769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.084 [2024-12-05 19:41:41.867361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.084 [2024-12-05 19:41:41.867441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:23.084 [2024-12-05 19:41:41.867465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.333 ms 00:23:23.084 [2024-12-05 19:41:41.867475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.084 [2024-12-05 19:41:41.893865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.084 [2024-12-05 19:41:41.893940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:23.084 [2024-12-05 19:41:41.893960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.253 ms 00:23:23.084 [2024-12-05 19:41:41.893968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.084 [2024-12-05 19:41:41.921208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.084 [2024-12-05 19:41:41.921493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:23.084 [2024-12-05 19:41:41.921525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.999 ms 00:23:23.084 [2024-12-05 19:41:41.921534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.084 [2024-12-05 19:41:41.921660] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:23.084 [2024-12-05 19:41:41.921680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 
[2024-12-05 19:41:41.921754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:23.084 [2024-12-05 19:41:41.921991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: 
free 00:23:23.085 [2024-12-05 19:41:41.922023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 
261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:23.085 [2024-12-05 19:41:41.922700] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:23.085 [2024-12-05 19:41:41.922712] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5b576645-d761-45ff-acc8-8625a1d5c445 00:23:23.085 [2024-12-05 19:41:41.922721] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:23.085 [2024-12-05 19:41:41.922737] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:23.085 [2024-12-05 19:41:41.922745] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:23.085 [2024-12-05 19:41:41.922758] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:23.085 [2024-12-05 19:41:41.922766] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:23.085 [2024-12-05 19:41:41.922777] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] crit: 0 00:23:23.085 [2024-12-05 19:41:41.922784] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:23.085 [2024-12-05 19:41:41.922794] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:23.085 [2024-12-05 19:41:41.922801] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:23.085 [2024-12-05 19:41:41.922811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.085 [2024-12-05 19:41:41.922819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:23.085 [2024-12-05 19:41:41.922831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.154 ms 00:23:23.085 [2024-12-05 19:41:41.922839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.085 [2024-12-05 19:41:41.937064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.085 [2024-12-05 19:41:41.937317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:23.085 [2024-12-05 19:41:41.937348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.170 ms 00:23:23.085 [2024-12-05 19:41:41.937359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.085 [2024-12-05 19:41:41.937859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:23.085 [2024-12-05 19:41:41.937873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:23.085 [2024-12-05 19:41:41.937885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.375 ms 00:23:23.085 [2024-12-05 19:41:41.937894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.085 [2024-12-05 19:41:41.986784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.085 [2024-12-05 19:41:41.986864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:23.085 [2024-12-05 19:41:41.986880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.086 [2024-12-05 19:41:41.986889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.086 [2024-12-05 19:41:41.987058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.086 [2024-12-05 19:41:41.987069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:23.086 [2024-12-05 19:41:41.987080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.086 [2024-12-05 19:41:41.987088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.086 [2024-12-05 19:41:41.987201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.086 [2024-12-05 19:41:41.987212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:23.086 [2024-12-05 19:41:41.987229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.086 [2024-12-05 19:41:41.987237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.086 [2024-12-05 19:41:41.987274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.086 [2024-12-05 19:41:41.987284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:23.086 [2024-12-05 19:41:41.987295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.086 [2024-12-05 19:41:41.987303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.086 [2024-12-05 
19:41:42.079111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.086 [2024-12-05 19:41:42.079206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:23.086 [2024-12-05 19:41:42.079225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.086 [2024-12-05 19:41:42.079235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.347 [2024-12-05 19:41:42.151546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.347 [2024-12-05 19:41:42.151626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:23.347 [2024-12-05 19:41:42.151643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.347 [2024-12-05 19:41:42.151652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.347 [2024-12-05 19:41:42.151785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.347 [2024-12-05 19:41:42.151796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:23.347 [2024-12-05 19:41:42.151810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.347 [2024-12-05 19:41:42.151823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.347 [2024-12-05 19:41:42.151884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.347 [2024-12-05 19:41:42.151893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:23.347 [2024-12-05 19:41:42.151903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.347 [2024-12-05 19:41:42.151912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.347 [2024-12-05 19:41:42.152040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.347 [2024-12-05 19:41:42.152050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:23.347 [2024-12-05 19:41:42.152061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.347 [2024-12-05 19:41:42.152072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.347 [2024-12-05 19:41:42.152167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.347 [2024-12-05 19:41:42.152179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:23.347 [2024-12-05 19:41:42.152190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.347 [2024-12-05 19:41:42.152198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.347 [2024-12-05 19:41:42.152263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.347 [2024-12-05 19:41:42.152273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:23.347 [2024-12-05 19:41:42.152287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.347 [2024-12-05 19:41:42.152295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.347 [2024-12-05 19:41:42.152369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:23.347 [2024-12-05 19:41:42.152380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:23.347 [2024-12-05 19:41:42.152412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:23.347 [2024-12-05 19:41:42.152420] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:23.347 [2024-12-05 19:41:42.152655] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 407.623 ms, result 0 00:23:23.347 true 00:23:23.347 19:41:42 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 76497 00:23:23.347 19:41:42 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76497 ']' 00:23:23.347 19:41:42 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76497 00:23:23.347 19:41:42 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:23:23.347 19:41:42 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:23.347 19:41:42 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76497 00:23:23.347 killing process with pid 76497 00:23:23.347 19:41:42 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:23.347 19:41:42 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:23.347 19:41:42 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76497' 00:23:23.347 19:41:42 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76497 00:23:23.347 19:41:42 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76497 00:23:33.343 19:41:51 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:23:33.604 65536+0 records in 00:23:33.604 65536+0 records out 00:23:33.604 268435456 bytes (268 MB, 256 MiB) copied, 1.1015 s, 244 MB/s 00:23:33.604 19:41:52 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:33.604 [2024-12-05 19:41:52.560706] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
00:23:33.604 [2024-12-05 19:41:52.560859] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76702 ] 00:23:33.866 [2024-12-05 19:41:52.725790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.866 [2024-12-05 19:41:52.862437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:34.446 [2024-12-05 19:41:53.168816] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:34.446 [2024-12-05 19:41:53.168915] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:34.446 [2024-12-05 19:41:53.334168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.446 [2024-12-05 19:41:53.334246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:34.446 [2024-12-05 19:41:53.334262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:34.446 [2024-12-05 19:41:53.334271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.446 [2024-12-05 19:41:53.337795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.446 [2024-12-05 19:41:53.337869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:34.446 [2024-12-05 19:41:53.337883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.497 ms 00:23:34.446 [2024-12-05 19:41:53.337893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.446 [2024-12-05 19:41:53.338088] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:34.446 [2024-12-05 19:41:53.338920] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:34.446 [2024-12-05 19:41:53.338948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.446 [2024-12-05 19:41:53.338958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:34.446 [2024-12-05 19:41:53.338968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.874 ms 00:23:34.446 [2024-12-05 19:41:53.338977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.446 [2024-12-05 19:41:53.340886] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:34.446 [2024-12-05 19:41:53.354832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.446 [2024-12-05 19:41:53.354894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:34.446 [2024-12-05 19:41:53.354909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.948 ms 00:23:34.446 [2024-12-05 19:41:53.354919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.446 [2024-12-05 19:41:53.355072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.446 [2024-12-05 19:41:53.355086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:34.446 [2024-12-05 19:41:53.355097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:23:34.446 [2024-12-05 19:41:53.355105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.446 [2024-12-05 19:41:53.363923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:34.446 [2024-12-05 19:41:53.363979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:34.446 [2024-12-05 19:41:53.363992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.740 ms 00:23:34.446 [2024-12-05 19:41:53.364000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.446 [2024-12-05 19:41:53.364160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.446 [2024-12-05 19:41:53.364174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:34.446 [2024-12-05 19:41:53.364185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:23:34.446 [2024-12-05 19:41:53.364193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.446 [2024-12-05 19:41:53.364233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.446 [2024-12-05 19:41:53.364242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:34.446 [2024-12-05 19:41:53.364251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:34.446 [2024-12-05 19:41:53.364259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.446 [2024-12-05 19:41:53.364285] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:34.446 [2024-12-05 19:41:53.368547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.446 [2024-12-05 19:41:53.368589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:34.446 [2024-12-05 19:41:53.368601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.270 ms 00:23:34.446 [2024-12-05 19:41:53.368610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.446 [2024-12-05 19:41:53.368702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.446 [2024-12-05 19:41:53.368714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:34.446 [2024-12-05 19:41:53.368724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:23:34.446 [2024-12-05 19:41:53.368733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.446 [2024-12-05 19:41:53.368759] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:34.446 [2024-12-05 19:41:53.368784] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:34.446 [2024-12-05 19:41:53.368821] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:34.446 [2024-12-05 19:41:53.368838] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:34.446 [2024-12-05 19:41:53.368950] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:34.446 [2024-12-05 19:41:53.368961] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:34.446 [2024-12-05 19:41:53.368974] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:34.446 [2024-12-05 19:41:53.368987] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:34.446 [2024-12-05 19:41:53.368997] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:34.446 [2024-12-05 19:41:53.369006] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:34.446 [2024-12-05 19:41:53.369014] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:34.446 [2024-12-05 19:41:53.369022] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:34.446 [2024-12-05 19:41:53.369030] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:34.446 [2024-12-05 19:41:53.369039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.446 [2024-12-05 19:41:53.369047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:34.446 [2024-12-05 19:41:53.369055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:23:34.446 [2024-12-05 19:41:53.369062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.446 [2024-12-05 19:41:53.369178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.447 [2024-12-05 19:41:53.369192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:34.447 [2024-12-05 19:41:53.369200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:23:34.447 [2024-12-05 19:41:53.369209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.447 [2024-12-05 19:41:53.369323] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:34.447 [2024-12-05 19:41:53.369334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:34.447 [2024-12-05 19:41:53.369342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:34.447 [2024-12-05 19:41:53.369350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.447 [2024-12-05 19:41:53.369359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:34.447 [2024-12-05 19:41:53.369366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:34.447 [2024-12-05 19:41:53.369374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:34.447 [2024-12-05 19:41:53.369382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:34.447 [2024-12-05 19:41:53.369389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:34.447 [2024-12-05 19:41:53.369397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:34.447 [2024-12-05 19:41:53.369404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:34.447 [2024-12-05 19:41:53.369419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:34.447 [2024-12-05 19:41:53.369425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:34.447 [2024-12-05 19:41:53.369432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:34.447 [2024-12-05 19:41:53.369446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:34.447 [2024-12-05 19:41:53.369457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.447 [2024-12-05 19:41:53.369465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:34.447 [2024-12-05 19:41:53.369472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:34.447 [2024-12-05 19:41:53.369479] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.447 [2024-12-05 19:41:53.369494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:34.447 [2024-12-05 19:41:53.369502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:34.447 [2024-12-05 19:41:53.369509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:34.447 [2024-12-05 19:41:53.369517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:34.447 [2024-12-05 19:41:53.369524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:34.447 [2024-12-05 19:41:53.369531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:34.447 [2024-12-05 19:41:53.369537] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:34.447 [2024-12-05 19:41:53.369544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:34.447 [2024-12-05 19:41:53.369551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:34.447 [2024-12-05 19:41:53.369558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:34.447 [2024-12-05 19:41:53.369565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:34.447 [2024-12-05 19:41:53.369572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:34.447 [2024-12-05 19:41:53.369579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:34.447 [2024-12-05 19:41:53.369585] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:34.447 [2024-12-05 19:41:53.369592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:34.447 [2024-12-05 19:41:53.369598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:34.447 [2024-12-05 19:41:53.369605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:34.447 [2024-12-05 19:41:53.369611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:34.447 [2024-12-05 19:41:53.369618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:34.447 [2024-12-05 19:41:53.369624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:34.447 [2024-12-05 19:41:53.369631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.447 [2024-12-05 19:41:53.369637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:34.447 [2024-12-05 19:41:53.369643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:34.447 [2024-12-05 19:41:53.369649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.447 [2024-12-05 19:41:53.369655] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:34.447 [2024-12-05 19:41:53.369664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:34.447 [2024-12-05 19:41:53.369673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:34.447 [2024-12-05 19:41:53.369683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:34.447 [2024-12-05 19:41:53.369693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:34.447 [2024-12-05 19:41:53.369701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:34.447 [2024-12-05 19:41:53.369707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:34.447 
[2024-12-05 19:41:53.369714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:34.447 [2024-12-05 19:41:53.369721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:34.447 [2024-12-05 19:41:53.369729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:34.447 [2024-12-05 19:41:53.369738] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:34.447 [2024-12-05 19:41:53.369747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:34.447 [2024-12-05 19:41:53.369756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:34.447 [2024-12-05 19:41:53.369764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:34.447 [2024-12-05 19:41:53.369771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:34.447 [2024-12-05 19:41:53.369779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:34.447 [2024-12-05 19:41:53.369786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:34.447 [2024-12-05 19:41:53.369793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:34.447 [2024-12-05 19:41:53.369801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:34.447 [2024-12-05 19:41:53.369808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:34.447 [2024-12-05 19:41:53.369815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:34.447 [2024-12-05 19:41:53.369822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:34.447 [2024-12-05 19:41:53.369829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:34.447 [2024-12-05 19:41:53.369837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:34.447 [2024-12-05 19:41:53.369844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:34.447 [2024-12-05 19:41:53.369851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:34.447 [2024-12-05 19:41:53.369857] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:34.447 [2024-12-05 19:41:53.369866] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:34.447 [2024-12-05 19:41:53.369874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:34.447 [2024-12-05 19:41:53.369881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:34.447 [2024-12-05 19:41:53.369889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:34.447 [2024-12-05 19:41:53.369896] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:34.448 [2024-12-05 19:41:53.369904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.448 [2024-12-05 19:41:53.369915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:34.448 [2024-12-05 19:41:53.369923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:23:34.448 [2024-12-05 19:41:53.369932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.448 [2024-12-05 19:41:53.402940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.448 [2024-12-05 19:41:53.403295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:34.448 [2024-12-05 19:41:53.403320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.940 ms 00:23:34.448 [2024-12-05 19:41:53.403329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.448 [2024-12-05 19:41:53.403524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.448 [2024-12-05 19:41:53.403536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:34.448 [2024-12-05 19:41:53.403546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:23:34.448 [2024-12-05 19:41:53.403555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.448 [2024-12-05 19:41:53.447629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.448 [2024-12-05 19:41:53.447707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:34.448 [2024-12-05 19:41:53.447727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.047 ms 00:23:34.448 [2024-12-05 19:41:53.447737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.448 [2024-12-05 19:41:53.447910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.448 [2024-12-05 19:41:53.447924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:34.448 [2024-12-05 19:41:53.447934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:34.448 [2024-12-05 19:41:53.447943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.448 [2024-12-05 19:41:53.448564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.448 [2024-12-05 19:41:53.448588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:34.448 [2024-12-05 19:41:53.448610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.594 ms 00:23:34.448 [2024-12-05 19:41:53.448619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.448 [2024-12-05 19:41:53.448786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.448 [2024-12-05 19:41:53.448798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:34.448 [2024-12-05 19:41:53.448807] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:23:34.448 [2024-12-05 19:41:53.448817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.709 [2024-12-05 19:41:53.465264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.710 [2024-12-05 19:41:53.465324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:34.710 [2024-12-05 19:41:53.465338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.421 ms 00:23:34.710 [2024-12-05 19:41:53.465348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.710 [2024-12-05 19:41:53.480021] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:23:34.710 [2024-12-05 19:41:53.480093] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:34.710 [2024-12-05 19:41:53.480110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.710 [2024-12-05 19:41:53.480119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:34.710 [2024-12-05 19:41:53.480160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.599 ms 00:23:34.710 [2024-12-05 19:41:53.480169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.710 [2024-12-05 19:41:53.506959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.710 [2024-12-05 19:41:53.507363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:34.710 [2024-12-05 19:41:53.507393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.645 ms 00:23:34.710 [2024-12-05 19:41:53.507403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.710 [2024-12-05 19:41:53.522865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.710 [2024-12-05 19:41:53.523232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:34.710 [2024-12-05 19:41:53.523261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.826 ms 00:23:34.710 [2024-12-05 19:41:53.523270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.710 [2024-12-05 19:41:53.537320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.710 [2024-12-05 19:41:53.537410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:34.710 [2024-12-05 19:41:53.537428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.912 ms 00:23:34.710 [2024-12-05 19:41:53.537437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.710 [2024-12-05 19:41:53.538213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.710 [2024-12-05 19:41:53.538242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:34.710 [2024-12-05 19:41:53.538254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:23:34.710 [2024-12-05 19:41:53.538263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.710 [2024-12-05 19:41:53.607984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.710 [2024-12-05 19:41:53.608054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:34.710 [2024-12-05 19:41:53.608071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 69.690 ms 00:23:34.710 [2024-12-05 19:41:53.608082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.710 [2024-12-05 19:41:53.621428] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:34.710 [2024-12-05 19:41:53.643087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.710 [2024-12-05 19:41:53.643383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:34.710 [2024-12-05 19:41:53.643410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.826 ms 00:23:34.710 [2024-12-05 19:41:53.643419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.710 [2024-12-05 19:41:53.643568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.710 [2024-12-05 19:41:53.643581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:34.710 [2024-12-05 19:41:53.643592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:34.710 [2024-12-05 19:41:53.643601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.710 [2024-12-05 19:41:53.643662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.710 [2024-12-05 19:41:53.643672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:34.710 [2024-12-05 19:41:53.643682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:34.710 [2024-12-05 19:41:53.643690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.710 [2024-12-05 19:41:53.643724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.710 [2024-12-05 19:41:53.643736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:34.710 [2024-12-05 19:41:53.643744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:34.710 [2024-12-05 19:41:53.643752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.710 [2024-12-05 19:41:53.643791] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:34.710 [2024-12-05 19:41:53.643802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.710 [2024-12-05 19:41:53.643810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:34.710 [2024-12-05 19:41:53.643819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:34.710 [2024-12-05 19:41:53.643827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.710 [2024-12-05 19:41:53.671936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.710 [2024-12-05 19:41:53.672014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:34.710 [2024-12-05 19:41:53.672031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.085 ms 00:23:34.710 [2024-12-05 19:41:53.672041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:34.710 [2024-12-05 19:41:53.672244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:34.710 [2024-12-05 19:41:53.672259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:34.710 [2024-12-05 19:41:53.672269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:23:34.710 [2024-12-05 19:41:53.672278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:34.710 [2024-12-05 19:41:53.673379] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:34.710 [2024-12-05 19:41:53.677357] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 338.879 ms, result 0 00:23:34.710 [2024-12-05 19:41:53.678620] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:34.710 [2024-12-05 19:41:53.692721] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:36.100  [2024-12-05T19:41:56.048Z] Copying: 13/256 [MB] (13 MBps) [2024-12-05T19:41:57.044Z] Copying: 24/256 [MB] (11 MBps) [2024-12-05T19:41:57.992Z] Copying: 35352/262144 [kB] (9956 kBps) [2024-12-05T19:41:58.935Z] Copying: 45/256 [MB] (11 MBps) [2024-12-05T19:41:59.939Z] Copying: 60/256 [MB] (14 MBps) [2024-12-05T19:42:00.883Z] Copying: 71/256 [MB] (11 MBps) [2024-12-05T19:42:01.827Z] Copying: 88/256 [MB] (17 MBps) [2024-12-05T19:42:02.771Z] Copying: 116/256 [MB] (27 MBps) [2024-12-05T19:42:03.713Z] Copying: 150/256 [MB] (34 MBps) [2024-12-05T19:42:05.125Z] Copying: 191/256 [MB] (40 MBps) [2024-12-05T19:42:05.385Z] Copying: 228/256 [MB] (37 MBps) [2024-12-05T19:42:05.385Z] Copying: 256/256 [MB] (average 21 MBps)[2024-12-05 19:42:05.360275] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:46.379 [2024-12-05 19:42:05.369598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.379 [2024-12-05 19:42:05.369649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:46.379 [2024-12-05 19:42:05.369663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:46.379 [2024-12-05 19:42:05.369681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.379 [2024-12-05 19:42:05.369706] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:46.379 [2024-12-05 19:42:05.372328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.379 [2024-12-05 19:42:05.372365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:46.379 [2024-12-05 19:42:05.372376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.607 ms 00:23:46.379 [2024-12-05 19:42:05.372385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.379 [2024-12-05 19:42:05.373592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.379 [2024-12-05 19:42:05.373626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:46.379 [2024-12-05 19:42:05.373636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.181 ms 00:23:46.379 [2024-12-05 19:42:05.373643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.379 [2024-12-05 19:42:05.380155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.379 [2024-12-05 19:42:05.380198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:46.379 [2024-12-05 19:42:05.380208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.494 ms 00:23:46.379 [2024-12-05 19:42:05.380215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.638 [2024-12-05 19:42:05.387205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:46.638 [2024-12-05 19:42:05.387409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:46.638 [2024-12-05 19:42:05.387426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.934 ms 00:23:46.638 [2024-12-05 19:42:05.387434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.638 [2024-12-05 19:42:05.411159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.638 [2024-12-05 19:42:05.411209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:46.638 [2024-12-05 19:42:05.411221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.668 ms 00:23:46.638 [2024-12-05 19:42:05.411229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.638 [2024-12-05 19:42:05.425561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.638 [2024-12-05 19:42:05.425613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:46.638 [2024-12-05 19:42:05.425630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.277 ms 00:23:46.638 [2024-12-05 19:42:05.425637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.638 [2024-12-05 19:42:05.425792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.638 [2024-12-05 19:42:05.425803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:46.638 [2024-12-05 19:42:05.425811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:23:46.638 [2024-12-05 19:42:05.425826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.638 [2024-12-05 19:42:05.449528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.638 [2024-12-05 19:42:05.449578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:46.638 [2024-12-05 19:42:05.449590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.685 ms 00:23:46.638 [2024-12-05 19:42:05.449598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.638 [2024-12-05 19:42:05.473410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.638 [2024-12-05 19:42:05.473454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:46.638 [2024-12-05 19:42:05.473466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.755 ms 00:23:46.638 [2024-12-05 19:42:05.473473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.639 [2024-12-05 19:42:05.496423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.639 [2024-12-05 19:42:05.496480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:46.639 [2024-12-05 19:42:05.496492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.873 ms 00:23:46.639 [2024-12-05 19:42:05.496499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.639 [2024-12-05 19:42:05.519093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.639 [2024-12-05 19:42:05.519284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:46.639 [2024-12-05 19:42:05.519303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.512 ms 00:23:46.639 [2024-12-05 19:42:05.519312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.639 [2024-12-05 
19:42:05.519355] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:46.639 [2024-12-05 19:42:05.519370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free [Band 2 through Band 98 elided: every entry reads 0 / 261120 wr_cnt: 0 state: free] 00:23:46.640 [2024-12-05 19:42:05.520116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:46.640 [2024-12-05 19:42:05.520124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:46.640 [2024-12-05 19:42:05.520149] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:46.640 [2024-12-05 19:42:05.520157] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5b576645-d761-45ff-acc8-8625a1d5c445 00:23:46.640 [2024-12-05 19:42:05.520165] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:46.640 [2024-12-05 19:42:05.520172] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:46.640 [2024-12-05 19:42:05.520179] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:46.640 [2024-12-05 19:42:05.520187] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:46.640 [2024-12-05 19:42:05.520193] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:46.640 [2024-12-05 19:42:05.520201] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:46.640 [2024-12-05 19:42:05.520208] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:46.640 [2024-12-05 19:42:05.520215] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:46.640 [2024-12-05 19:42:05.520221] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:46.640 [2024-12-05 19:42:05.520228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.640 [2024-12-05 19:42:05.520238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:46.640 [2024-12-05 19:42:05.520246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.874 ms 00:23:46.640 [2024-12-05 19:42:05.520253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.640 [2024-12-05 19:42:05.533169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.640 [2024-12-05 19:42:05.533302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:46.640 [2024-12-05 19:42:05.533357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.896 ms 00:23:46.640 [2024-12-05 19:42:05.533380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.640 [2024-12-05 19:42:05.533835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.640 [2024-12-05 19:42:05.533914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:46.640 [2024-12-05 19:42:05.533976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:23:46.640 [2024-12-05 19:42:05.534008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.640 [2024-12-05 19:42:05.569049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.640 [2024-12-05 19:42:05.569248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:46.640 [2024-12-05 19:42:05.569300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.640 [2024-12-05 19:42:05.569322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.640 [2024-12-05 19:42:05.569435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.640 [2024-12-05 19:42:05.569494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:46.640 [2024-12-05 19:42:05.569621] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.640 [2024-12-05 19:42:05.569644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.640 [2024-12-05 19:42:05.569713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.640 [2024-12-05 19:42:05.569736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:46.640 [2024-12-05 19:42:05.569756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.640 [2024-12-05 19:42:05.569774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.640 [2024-12-05 19:42:05.569850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.640 [2024-12-05 19:42:05.569878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:46.640 [2024-12-05 19:42:05.569897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.640 [2024-12-05 19:42:05.569915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.899 [2024-12-05 19:42:05.646513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.899 [2024-12-05 19:42:05.646705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:46.899 [2024-12-05 19:42:05.646758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.899 [2024-12-05 19:42:05.646780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.899 [2024-12-05 19:42:05.709963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.899 [2024-12-05 19:42:05.710177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:46.899 [2024-12-05 19:42:05.710233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.899 [2024-12-05 19:42:05.710275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.899 [2024-12-05 19:42:05.710348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.899 [2024-12-05 19:42:05.710440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:46.899 [2024-12-05 19:42:05.710465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.899 [2024-12-05 19:42:05.710483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.899 [2024-12-05 19:42:05.710524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.899 [2024-12-05 19:42:05.710577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:46.899 [2024-12-05 19:42:05.710607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.899 [2024-12-05 19:42:05.710627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.899 [2024-12-05 19:42:05.710732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.899 [2024-12-05 19:42:05.710840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:46.899 [2024-12-05 19:42:05.710859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.899 [2024-12-05 19:42:05.710878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.899 [2024-12-05 19:42:05.710970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.899 [2024-12-05 19:42:05.710995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:23:46.899 [2024-12-05 19:42:05.711015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.899 [2024-12-05 19:42:05.711038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.899 [2024-12-05 19:42:05.711084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.899 [2024-12-05 19:42:05.711106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:46.899 [2024-12-05 19:42:05.711194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.899 [2024-12-05 19:42:05.711219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.899 [2024-12-05 19:42:05.711272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.899 [2024-12-05 19:42:05.711296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:46.899 [2024-12-05 19:42:05.711336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.899 [2024-12-05 19:42:05.711354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.899 [2024-12-05 19:42:05.711544] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 341.970 ms, result 0 00:23:47.837 00:23:47.837 00:23:47.837 19:42:06 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=76850 00:23:47.837 19:42:06 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:23:47.837 19:42:06 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 76850 00:23:47.837 19:42:06 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 76850 ']' 00:23:47.837 19:42:06 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.837 19:42:06 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:47.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.837 19:42:06 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.837 19:42:06 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:47.837 19:42:06 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:47.837 [2024-12-05 19:42:06.654371] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
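The launch traced above follows the stock SPDK harness pattern: start spdk_tgt, poll its UNIX-domain RPC socket until it answers, then drive it over JSON-RPC (waitforlisten is the harness helper that does the polling). A minimal self-contained sketch of the same flow, assuming a saved config in a hypothetical ftl.json:

    #!/usr/bin/env bash
    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" -L ftl_init &
    svcpid=$!
    # poll until the RPC socket at /var/tmp/spdk.sock accepts a request
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
    done
    "$SPDK/scripts/rpc.py" load_config < ftl.json   # replay the saved bdev/FTL configuration
    kill "$svcpid" && wait "$svcpid"

The bdev_ftl_unmap calls issued later in this run (ftl/trim.sh@78 and @79) go through the same rpc.py path once the target is listening.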
00:23:47.837 [2024-12-05 19:42:06.654500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76850 ] 00:23:47.837 [2024-12-05 19:42:06.812323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.095 [2024-12-05 19:42:06.922050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.663 19:42:07 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:48.663 19:42:07 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:23:48.663 19:42:07 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:23:48.922 [2024-12-05 19:42:07.778553] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:48.922 [2024-12-05 19:42:07.778620] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:49.185 [2024-12-05 19:42:07.949334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.185 [2024-12-05 19:42:07.949389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:49.185 [2024-12-05 19:42:07.949404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:49.185 [2024-12-05 19:42:07.949413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.185 [2024-12-05 19:42:07.952139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.185 [2024-12-05 19:42:07.952174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:49.185 [2024-12-05 19:42:07.952186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.695 ms 00:23:49.185 [2024-12-05 19:42:07.952194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.185 [2024-12-05 19:42:07.952326] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:49.185 [2024-12-05 19:42:07.953052] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:49.185 [2024-12-05 19:42:07.953081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.185 [2024-12-05 19:42:07.953090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:49.185 [2024-12-05 19:42:07.953100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.772 ms 00:23:49.185 [2024-12-05 19:42:07.953107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.185 [2024-12-05 19:42:07.954254] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:49.185 [2024-12-05 19:42:07.966391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.185 [2024-12-05 19:42:07.966439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:49.185 [2024-12-05 19:42:07.966453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.139 ms 00:23:49.185 [2024-12-05 19:42:07.966463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.185 [2024-12-05 19:42:07.966560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.185 [2024-12-05 19:42:07.966572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:49.185 [2024-12-05 19:42:07.966580] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:49.185 [2024-12-05 19:42:07.966589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.185 [2024-12-05 19:42:07.971714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.185 [2024-12-05 19:42:07.971882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:49.185 [2024-12-05 19:42:07.971898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.076 ms 00:23:49.185 [2024-12-05 19:42:07.971908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.185 [2024-12-05 19:42:07.972024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.185 [2024-12-05 19:42:07.972036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:49.185 [2024-12-05 19:42:07.972045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:23:49.185 [2024-12-05 19:42:07.972058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.185 [2024-12-05 19:42:07.972084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.185 [2024-12-05 19:42:07.972094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:49.185 [2024-12-05 19:42:07.972101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:49.185 [2024-12-05 19:42:07.972111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.185 [2024-12-05 19:42:07.972151] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:49.185 [2024-12-05 19:42:07.975528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.185 [2024-12-05 19:42:07.975557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:49.185 [2024-12-05 19:42:07.975568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.381 ms 00:23:49.185 [2024-12-05 19:42:07.975576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.185 [2024-12-05 19:42:07.975614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.185 [2024-12-05 19:42:07.975623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:49.185 [2024-12-05 19:42:07.975633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:49.185 [2024-12-05 19:42:07.975641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.185 [2024-12-05 19:42:07.975663] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:49.185 [2024-12-05 19:42:07.975681] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:49.185 [2024-12-05 19:42:07.975724] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:49.185 [2024-12-05 19:42:07.975739] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:49.185 [2024-12-05 19:42:07.975842] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:49.185 [2024-12-05 19:42:07.975852] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:49.185 [2024-12-05 19:42:07.975865] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:49.185 [2024-12-05 19:42:07.975874] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:49.185 [2024-12-05 19:42:07.975884] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:49.185 [2024-12-05 19:42:07.975893] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:49.185 [2024-12-05 19:42:07.975902] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:49.185 [2024-12-05 19:42:07.975909] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:49.185 [2024-12-05 19:42:07.975919] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:49.185 [2024-12-05 19:42:07.975926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.185 [2024-12-05 19:42:07.975935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:49.185 [2024-12-05 19:42:07.975943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:23:49.185 [2024-12-05 19:42:07.975951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.185 [2024-12-05 19:42:07.976052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.185 [2024-12-05 19:42:07.976063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:49.185 [2024-12-05 19:42:07.976070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:49.185 [2024-12-05 19:42:07.976079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.185 [2024-12-05 19:42:07.976197] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:49.185 [2024-12-05 19:42:07.976209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:49.185 [2024-12-05 19:42:07.976218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:49.185 [2024-12-05 19:42:07.976227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:49.186 [2024-12-05 19:42:07.976235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:49.186 [2024-12-05 19:42:07.976246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:49.186 [2024-12-05 19:42:07.976252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:49.186 [2024-12-05 19:42:07.976263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:49.186 [2024-12-05 19:42:07.976269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:49.186 [2024-12-05 19:42:07.976278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:49.186 [2024-12-05 19:42:07.976284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:49.186 [2024-12-05 19:42:07.976292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:49.186 [2024-12-05 19:42:07.976298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:49.186 [2024-12-05 19:42:07.976306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:49.186 [2024-12-05 19:42:07.976313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:49.186 [2024-12-05 19:42:07.976323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:49.186 
[2024-12-05 19:42:07.976330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:49.186 [2024-12-05 19:42:07.976338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:49.186 [2024-12-05 19:42:07.976349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:49.186 [2024-12-05 19:42:07.976357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:49.186 [2024-12-05 19:42:07.976364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:49.186 [2024-12-05 19:42:07.976371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:49.186 [2024-12-05 19:42:07.976378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:49.186 [2024-12-05 19:42:07.976387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:49.186 [2024-12-05 19:42:07.976393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:49.186 [2024-12-05 19:42:07.976401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:49.186 [2024-12-05 19:42:07.976408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:49.186 [2024-12-05 19:42:07.976415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:49.186 [2024-12-05 19:42:07.976422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:49.186 [2024-12-05 19:42:07.976431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:49.186 [2024-12-05 19:42:07.976437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:49.186 [2024-12-05 19:42:07.976445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:49.186 [2024-12-05 19:42:07.976452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:49.186 [2024-12-05 19:42:07.976460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:49.186 [2024-12-05 19:42:07.976466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:49.186 [2024-12-05 19:42:07.976474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:49.186 [2024-12-05 19:42:07.976481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:49.186 [2024-12-05 19:42:07.976488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:49.186 [2024-12-05 19:42:07.976495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:49.186 [2024-12-05 19:42:07.976504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:49.186 [2024-12-05 19:42:07.976510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:49.186 [2024-12-05 19:42:07.976517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:49.186 [2024-12-05 19:42:07.976524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:49.186 [2024-12-05 19:42:07.976531] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:49.186 [2024-12-05 19:42:07.976541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:49.186 [2024-12-05 19:42:07.976549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:49.186 [2024-12-05 19:42:07.976556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:49.186 [2024-12-05 19:42:07.976565] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:23:49.186 [2024-12-05 19:42:07.976572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:49.186 [2024-12-05 19:42:07.976580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:49.186 [2024-12-05 19:42:07.976586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:49.186 [2024-12-05 19:42:07.976594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:49.186 [2024-12-05 19:42:07.976600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:49.186 [2024-12-05 19:42:07.976610] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:49.186 [2024-12-05 19:42:07.976619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:49.186 [2024-12-05 19:42:07.976631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:49.186 [2024-12-05 19:42:07.976639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:49.186 [2024-12-05 19:42:07.976647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:49.186 [2024-12-05 19:42:07.976655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:49.186 [2024-12-05 19:42:07.976663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:49.186 [2024-12-05 19:42:07.976670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:49.186 [2024-12-05 19:42:07.976678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:49.186 [2024-12-05 19:42:07.976685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:49.186 [2024-12-05 19:42:07.976694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:49.186 [2024-12-05 19:42:07.976702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:49.186 [2024-12-05 19:42:07.976710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:49.186 [2024-12-05 19:42:07.976717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:49.186 [2024-12-05 19:42:07.976726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:49.186 [2024-12-05 19:42:07.976733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:49.186 [2024-12-05 19:42:07.976742] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:49.186 [2024-12-05 
19:42:07.976751] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:49.186 [2024-12-05 19:42:07.976761] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:49.186 [2024-12-05 19:42:07.976768] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:49.186 [2024-12-05 19:42:07.976777] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:49.186 [2024-12-05 19:42:07.976785] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:49.186 [2024-12-05 19:42:07.976793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.186 [2024-12-05 19:42:07.976800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:49.186 [2024-12-05 19:42:07.976809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:23:49.186 [2024-12-05 19:42:07.976817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.186 [2024-12-05 19:42:08.002838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.186 [2024-12-05 19:42:08.003039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:49.186 [2024-12-05 19:42:08.003062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.951 ms 00:23:49.186 [2024-12-05 19:42:08.003074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.186 [2024-12-05 19:42:08.003246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.186 [2024-12-05 19:42:08.003257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:49.186 [2024-12-05 19:42:08.003268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:23:49.186 [2024-12-05 19:42:08.003275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.186 [2024-12-05 19:42:08.033759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.186 [2024-12-05 19:42:08.033952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:49.186 [2024-12-05 19:42:08.033973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.458 ms 00:23:49.186 [2024-12-05 19:42:08.033981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.186 [2024-12-05 19:42:08.034087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.186 [2024-12-05 19:42:08.034097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:49.186 [2024-12-05 19:42:08.034108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:49.186 [2024-12-05 19:42:08.034116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.186 [2024-12-05 19:42:08.034463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.186 [2024-12-05 19:42:08.034479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:49.186 [2024-12-05 19:42:08.034491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:23:49.186 [2024-12-05 19:42:08.034499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:49.186 [2024-12-05 19:42:08.034628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.186 [2024-12-05 19:42:08.034636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:49.186 [2024-12-05 19:42:08.034645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:23:49.186 [2024-12-05 19:42:08.034652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.187 [2024-12-05 19:42:08.052643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.187 [2024-12-05 19:42:08.052881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:49.187 [2024-12-05 19:42:08.052912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.963 ms 00:23:49.187 [2024-12-05 19:42:08.052925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.187 [2024-12-05 19:42:08.085632] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:49.187 [2024-12-05 19:42:08.085696] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:49.187 [2024-12-05 19:42:08.085714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.187 [2024-12-05 19:42:08.085724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:49.187 [2024-12-05 19:42:08.085736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.595 ms 00:23:49.187 [2024-12-05 19:42:08.085750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.187 [2024-12-05 19:42:08.110910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.187 [2024-12-05 19:42:08.110982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:49.187 [2024-12-05 19:42:08.110998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.040 ms 00:23:49.187 [2024-12-05 19:42:08.111005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.187 [2024-12-05 19:42:08.123526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.187 [2024-12-05 19:42:08.123578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:49.187 [2024-12-05 19:42:08.123594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.425 ms 00:23:49.187 [2024-12-05 19:42:08.123602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.187 [2024-12-05 19:42:08.135761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.187 [2024-12-05 19:42:08.135814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:49.187 [2024-12-05 19:42:08.135828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.046 ms 00:23:49.187 [2024-12-05 19:42:08.135835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.187 [2024-12-05 19:42:08.136510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.187 [2024-12-05 19:42:08.136530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:49.187 [2024-12-05 19:42:08.136541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:23:49.187 [2024-12-05 19:42:08.136548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.554 [2024-12-05 
19:42:08.192548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.554 [2024-12-05 19:42:08.192614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:49.554 [2024-12-05 19:42:08.192630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.969 ms 00:23:49.554 [2024-12-05 19:42:08.192638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.554 [2024-12-05 19:42:08.203584] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:49.554 [2024-12-05 19:42:08.217965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.554 [2024-12-05 19:42:08.218027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:49.554 [2024-12-05 19:42:08.218042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.209 ms 00:23:49.554 [2024-12-05 19:42:08.218051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.554 [2024-12-05 19:42:08.218167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.554 [2024-12-05 19:42:08.218180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:49.554 [2024-12-05 19:42:08.218189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:49.554 [2024-12-05 19:42:08.218198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.554 [2024-12-05 19:42:08.218244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.554 [2024-12-05 19:42:08.218255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:49.554 [2024-12-05 19:42:08.218263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:49.554 [2024-12-05 19:42:08.218274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.554 [2024-12-05 19:42:08.218297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.554 [2024-12-05 19:42:08.218307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:49.554 [2024-12-05 19:42:08.218315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:49.554 [2024-12-05 19:42:08.218326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.554 [2024-12-05 19:42:08.218354] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:49.554 [2024-12-05 19:42:08.218367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.554 [2024-12-05 19:42:08.218377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:49.554 [2024-12-05 19:42:08.218386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:49.554 [2024-12-05 19:42:08.218393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.554 [2024-12-05 19:42:08.242318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.554 [2024-12-05 19:42:08.242371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:49.554 [2024-12-05 19:42:08.242386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.898 ms 00:23:49.554 [2024-12-05 19:42:08.242395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.554 [2024-12-05 19:42:08.242503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.554 [2024-12-05 19:42:08.242514] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:49.554 [2024-12-05 19:42:08.242525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:49.554 [2024-12-05 19:42:08.242535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.554 [2024-12-05 19:42:08.243366] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:49.554 [2024-12-05 19:42:08.246496] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 293.749 ms, result 0 00:23:49.554 [2024-12-05 19:42:08.247262] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:49.554 Some configs were skipped because the RPC state that can call them passed over. 00:23:49.554 19:42:08 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:23:49.554 [2024-12-05 19:42:08.493167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.554 [2024-12-05 19:42:08.493395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:49.554 [2024-12-05 19:42:08.493527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.095 ms 00:23:49.554 [2024-12-05 19:42:08.493563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.554 [2024-12-05 19:42:08.493625] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 13.555 ms, result 0 00:23:49.554 true 00:23:49.554 19:42:08 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:23:49.814 [2024-12-05 19:42:08.705361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:49.814 [2024-12-05 19:42:08.705552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:49.814 [2024-12-05 19:42:08.705607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.022 ms 00:23:49.814 [2024-12-05 19:42:08.705629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:49.814 [2024-12-05 19:42:08.705685] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.349 ms, result 0 00:23:49.814 true 00:23:49.814 19:42:08 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 76850 00:23:49.814 19:42:08 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 76850 ']' 00:23:49.814 19:42:08 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 76850 00:23:49.814 19:42:08 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:23:49.814 19:42:08 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.814 19:42:08 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76850 00:23:49.814 killing process with pid 76850 00:23:49.814 19:42:08 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:49.814 19:42:08 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:49.814 19:42:08 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76850' 00:23:49.814 19:42:08 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 76850 00:23:49.814 19:42:08 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 76850 00:23:50.753 [2024-12-05 19:42:09.453470] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.753 [2024-12-05 19:42:09.453531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:50.753 [2024-12-05 19:42:09.453545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:50.753 [2024-12-05 19:42:09.453555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.753 [2024-12-05 19:42:09.453579] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:50.753 [2024-12-05 19:42:09.456187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.753 [2024-12-05 19:42:09.456221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:50.753 [2024-12-05 19:42:09.456237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.590 ms 00:23:50.753 [2024-12-05 19:42:09.456245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.753 [2024-12-05 19:42:09.456555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.753 [2024-12-05 19:42:09.456570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:50.753 [2024-12-05 19:42:09.456580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:23:50.753 [2024-12-05 19:42:09.456588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.753 [2024-12-05 19:42:09.460572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.753 [2024-12-05 19:42:09.460603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:50.753 [2024-12-05 19:42:09.460616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.962 ms 00:23:50.753 [2024-12-05 19:42:09.460624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.753 [2024-12-05 19:42:09.467907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.753 [2024-12-05 19:42:09.468072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:50.753 [2024-12-05 19:42:09.468098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.245 ms 00:23:50.753 [2024-12-05 19:42:09.468105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.753 [2024-12-05 19:42:09.477548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.753 [2024-12-05 19:42:09.477595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:50.753 [2024-12-05 19:42:09.477611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.361 ms 00:23:50.753 [2024-12-05 19:42:09.477619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.753 [2024-12-05 19:42:09.484803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.753 [2024-12-05 19:42:09.484851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:50.753 [2024-12-05 19:42:09.484864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.139 ms 00:23:50.753 [2024-12-05 19:42:09.484872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.753 [2024-12-05 19:42:09.485011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.753 [2024-12-05 19:42:09.485021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:50.753 [2024-12-05 19:42:09.485031] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:23:50.753 [2024-12-05 19:42:09.485039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.753 [2024-12-05 19:42:09.494823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.753 [2024-12-05 19:42:09.495028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:50.753 [2024-12-05 19:42:09.495050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.756 ms 00:23:50.753 [2024-12-05 19:42:09.495058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.753 [2024-12-05 19:42:09.505737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.753 [2024-12-05 19:42:09.505910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:50.753 [2024-12-05 19:42:09.505937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.485 ms 00:23:50.753 [2024-12-05 19:42:09.505945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.753 [2024-12-05 19:42:09.514978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.753 [2024-12-05 19:42:09.515027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:50.753 [2024-12-05 19:42:09.515039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.986 ms 00:23:50.753 [2024-12-05 19:42:09.515046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.753 [2024-12-05 19:42:09.524339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.753 [2024-12-05 19:42:09.524511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:50.753 [2024-12-05 19:42:09.524531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.200 ms 00:23:50.753 [2024-12-05 19:42:09.524539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.753 [2024-12-05 19:42:09.524576] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:50.753 [2024-12-05 19:42:09.524592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524681] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 
[2024-12-05 19:42:09.524889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.524996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.525005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.525012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.525022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.525029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.525039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.525046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.525055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.525063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.525071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.525079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.525087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:23:50.753 [2024-12-05 19:42:09.525095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.525103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.525110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.525119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:50.753 [2024-12-05 19:42:09.525146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:50.754 [2024-12-05 19:42:09.525470] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:50.754 [2024-12-05 19:42:09.525483] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5b576645-d761-45ff-acc8-8625a1d5c445 00:23:50.754 [2024-12-05 19:42:09.525494] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:50.754 [2024-12-05 19:42:09.525502] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:50.754 [2024-12-05 19:42:09.525509] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:50.754 [2024-12-05 19:42:09.525518] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:50.754 [2024-12-05 19:42:09.525525] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:50.754 [2024-12-05 19:42:09.525534] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:50.754 [2024-12-05 19:42:09.525541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:50.754 [2024-12-05 19:42:09.525549] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:50.754 [2024-12-05 19:42:09.525555] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:50.754 [2024-12-05 19:42:09.525564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:50.754 [2024-12-05 19:42:09.525571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:50.754 [2024-12-05 19:42:09.525581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.989 ms 00:23:50.754 [2024-12-05 19:42:09.525587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.754 [2024-12-05 19:42:09.538082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.754 [2024-12-05 19:42:09.538140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:50.754 [2024-12-05 19:42:09.538157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.438 ms 00:23:50.754 [2024-12-05 19:42:09.538165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.754 [2024-12-05 19:42:09.538548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:50.754 [2024-12-05 19:42:09.538568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:50.754 [2024-12-05 19:42:09.538581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:23:50.754 [2024-12-05 19:42:09.538588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.754 [2024-12-05 19:42:09.582243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.754 [2024-12-05 19:42:09.582291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:50.754 [2024-12-05 19:42:09.582306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.754 [2024-12-05 19:42:09.582315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.754 [2024-12-05 19:42:09.582438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.754 [2024-12-05 19:42:09.582448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:50.754 [2024-12-05 19:42:09.582461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.754 [2024-12-05 19:42:09.582469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.754 [2024-12-05 19:42:09.582517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.754 [2024-12-05 19:42:09.582526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:50.754 [2024-12-05 19:42:09.582537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.754 [2024-12-05 19:42:09.582544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.754 [2024-12-05 19:42:09.582562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.754 [2024-12-05 19:42:09.582570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:50.754 [2024-12-05 19:42:09.582579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.754 [2024-12-05 19:42:09.582587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.754 [2024-12-05 19:42:09.659630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.754 [2024-12-05 19:42:09.659847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:50.754 [2024-12-05 19:42:09.659869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.754 [2024-12-05 19:42:09.659877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.754 [2024-12-05 
19:42:09.723520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.754 [2024-12-05 19:42:09.723575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:50.754 [2024-12-05 19:42:09.723588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.754 [2024-12-05 19:42:09.723598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.754 [2024-12-05 19:42:09.723681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.754 [2024-12-05 19:42:09.723691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:50.754 [2024-12-05 19:42:09.723704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.754 [2024-12-05 19:42:09.723711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.754 [2024-12-05 19:42:09.723740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.754 [2024-12-05 19:42:09.723748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:50.754 [2024-12-05 19:42:09.723757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.754 [2024-12-05 19:42:09.723764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.754 [2024-12-05 19:42:09.723853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.754 [2024-12-05 19:42:09.723863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:50.754 [2024-12-05 19:42:09.723873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.754 [2024-12-05 19:42:09.723879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.754 [2024-12-05 19:42:09.723912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.754 [2024-12-05 19:42:09.723920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:50.754 [2024-12-05 19:42:09.723929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.754 [2024-12-05 19:42:09.723936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.754 [2024-12-05 19:42:09.723973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.754 [2024-12-05 19:42:09.723982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:50.754 [2024-12-05 19:42:09.723992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.754 [2024-12-05 19:42:09.724000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.754 [2024-12-05 19:42:09.724042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:50.754 [2024-12-05 19:42:09.724051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:50.754 [2024-12-05 19:42:09.724060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:50.754 [2024-12-05 19:42:09.724068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:50.754 [2024-12-05 19:42:09.724221] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 270.729 ms, result 0 00:23:51.694 19:42:10 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:51.694 19:42:10 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:51.694 [2024-12-05 19:42:10.696272] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:23:51.694 [2024-12-05 19:42:10.696406] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76908 ] 00:23:51.954 [2024-12-05 19:42:10.857188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.954 [2024-12-05 19:42:10.948622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.212 [2024-12-05 19:42:11.209184] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:52.212 [2024-12-05 19:42:11.209258] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:52.473 [2024-12-05 19:42:11.363498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.473 [2024-12-05 19:42:11.363731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:52.473 [2024-12-05 19:42:11.363751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:52.473 [2024-12-05 19:42:11.363759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.473 [2024-12-05 19:42:11.366462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.473 [2024-12-05 19:42:11.366501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:52.473 [2024-12-05 19:42:11.366512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.679 ms 00:23:52.473 [2024-12-05 19:42:11.366519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.473 [2024-12-05 19:42:11.366664] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:52.473 [2024-12-05 19:42:11.367365] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:52.473 [2024-12-05 19:42:11.367392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.473 [2024-12-05 19:42:11.367400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:52.473 [2024-12-05 19:42:11.367409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.739 ms 00:23:52.473 [2024-12-05 19:42:11.367417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.473 [2024-12-05 19:42:11.368536] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:52.474 [2024-12-05 19:42:11.381066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.474 [2024-12-05 19:42:11.381117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:52.474 [2024-12-05 19:42:11.381147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.530 ms 00:23:52.474 [2024-12-05 19:42:11.381156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.474 [2024-12-05 19:42:11.381283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.474 [2024-12-05 19:42:11.381295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:52.474 [2024-12-05 19:42:11.381304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.029 ms 00:23:52.474 [2024-12-05 19:42:11.381312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.474 [2024-12-05 19:42:11.386767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.474 [2024-12-05 19:42:11.386813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:52.474 [2024-12-05 19:42:11.386824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.411 ms 00:23:52.474 [2024-12-05 19:42:11.386831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.474 [2024-12-05 19:42:11.386942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.474 [2024-12-05 19:42:11.386952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:52.474 [2024-12-05 19:42:11.386960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:52.474 [2024-12-05 19:42:11.386968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.474 [2024-12-05 19:42:11.386995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.474 [2024-12-05 19:42:11.387004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:52.474 [2024-12-05 19:42:11.387012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:52.474 [2024-12-05 19:42:11.387019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.474 [2024-12-05 19:42:11.387040] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:52.474 [2024-12-05 19:42:11.390679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.474 [2024-12-05 19:42:11.390714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:52.474 [2024-12-05 19:42:11.390724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.644 ms 00:23:52.474 [2024-12-05 19:42:11.390731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.474 [2024-12-05 19:42:11.390778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.474 [2024-12-05 19:42:11.390787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:52.474 [2024-12-05 19:42:11.390795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:52.474 [2024-12-05 19:42:11.390802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.474 [2024-12-05 19:42:11.390822] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:52.474 [2024-12-05 19:42:11.390840] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:52.474 [2024-12-05 19:42:11.390875] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:52.474 [2024-12-05 19:42:11.390890] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:52.474 [2024-12-05 19:42:11.390991] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:52.474 [2024-12-05 19:42:11.391001] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:52.474 [2024-12-05 19:42:11.391011] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:52.474 [2024-12-05 19:42:11.391024] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:52.474 [2024-12-05 19:42:11.391033] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:52.474 [2024-12-05 19:42:11.391041] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:52.474 [2024-12-05 19:42:11.391048] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:52.474 [2024-12-05 19:42:11.391055] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:52.474 [2024-12-05 19:42:11.391062] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:52.474 [2024-12-05 19:42:11.391070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.474 [2024-12-05 19:42:11.391077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:52.474 [2024-12-05 19:42:11.391084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:23:52.474 [2024-12-05 19:42:11.391091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.474 [2024-12-05 19:42:11.391218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.474 [2024-12-05 19:42:11.391232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:52.474 [2024-12-05 19:42:11.391239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:52.474 [2024-12-05 19:42:11.391246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.474 [2024-12-05 19:42:11.391364] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:52.474 [2024-12-05 19:42:11.391375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:52.474 [2024-12-05 19:42:11.391383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:52.474 [2024-12-05 19:42:11.391390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.474 [2024-12-05 19:42:11.391398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:52.474 [2024-12-05 19:42:11.391404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:52.474 [2024-12-05 19:42:11.391411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:52.474 [2024-12-05 19:42:11.391417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:52.474 [2024-12-05 19:42:11.391424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:52.474 [2024-12-05 19:42:11.391431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:52.474 [2024-12-05 19:42:11.391437] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:52.474 [2024-12-05 19:42:11.391450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:52.474 [2024-12-05 19:42:11.391457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:52.474 [2024-12-05 19:42:11.391463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:52.474 [2024-12-05 19:42:11.391470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:52.474 [2024-12-05 19:42:11.391476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.474 [2024-12-05 19:42:11.391482] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:52.474 [2024-12-05 19:42:11.391488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:52.474 [2024-12-05 19:42:11.391494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.474 [2024-12-05 19:42:11.391501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:52.474 [2024-12-05 19:42:11.391507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:52.474 [2024-12-05 19:42:11.391514] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:52.474 [2024-12-05 19:42:11.391522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:52.474 [2024-12-05 19:42:11.391528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:52.474 [2024-12-05 19:42:11.391535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:52.474 [2024-12-05 19:42:11.391541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:52.474 [2024-12-05 19:42:11.391548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:52.474 [2024-12-05 19:42:11.391554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:52.474 [2024-12-05 19:42:11.391561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:52.474 [2024-12-05 19:42:11.391567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:52.474 [2024-12-05 19:42:11.391573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:52.474 [2024-12-05 19:42:11.391580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:52.474 [2024-12-05 19:42:11.391586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:52.474 [2024-12-05 19:42:11.391593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:52.474 [2024-12-05 19:42:11.391599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:52.474 [2024-12-05 19:42:11.391606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:52.474 [2024-12-05 19:42:11.391612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:52.475 [2024-12-05 19:42:11.391619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:52.475 [2024-12-05 19:42:11.391626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:52.475 [2024-12-05 19:42:11.391632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.475 [2024-12-05 19:42:11.391638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:52.475 [2024-12-05 19:42:11.391644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:52.475 [2024-12-05 19:42:11.391650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.475 [2024-12-05 19:42:11.391657] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:52.475 [2024-12-05 19:42:11.391665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:52.475 [2024-12-05 19:42:11.391673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:52.475 [2024-12-05 19:42:11.391680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.475 [2024-12-05 19:42:11.391687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:52.475 
[2024-12-05 19:42:11.391694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:52.475 [2024-12-05 19:42:11.391701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:52.475 [2024-12-05 19:42:11.391707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:52.475 [2024-12-05 19:42:11.391713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:52.475 [2024-12-05 19:42:11.391719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:52.475 [2024-12-05 19:42:11.391727] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:52.475 [2024-12-05 19:42:11.391736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:52.475 [2024-12-05 19:42:11.391745] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:52.475 [2024-12-05 19:42:11.391752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:52.475 [2024-12-05 19:42:11.391759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:52.475 [2024-12-05 19:42:11.391766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:52.475 [2024-12-05 19:42:11.391772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:52.475 [2024-12-05 19:42:11.391779] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:52.475 [2024-12-05 19:42:11.391786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:52.475 [2024-12-05 19:42:11.391793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:52.475 [2024-12-05 19:42:11.391800] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:52.475 [2024-12-05 19:42:11.391806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:52.475 [2024-12-05 19:42:11.391813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:52.475 [2024-12-05 19:42:11.391821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:52.475 [2024-12-05 19:42:11.391827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:52.475 [2024-12-05 19:42:11.391834] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:52.475 [2024-12-05 19:42:11.391841] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:52.475 [2024-12-05 19:42:11.391849] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:52.475 [2024-12-05 19:42:11.391857] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:52.475 [2024-12-05 19:42:11.391864] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:52.475 [2024-12-05 19:42:11.391870] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:52.475 [2024-12-05 19:42:11.391878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:52.475 [2024-12-05 19:42:11.391885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.475 [2024-12-05 19:42:11.391895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:52.475 [2024-12-05 19:42:11.391902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:23:52.475 [2024-12-05 19:42:11.391909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.475 [2024-12-05 19:42:11.418030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.475 [2024-12-05 19:42:11.418076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:52.475 [2024-12-05 19:42:11.418088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.064 ms 00:23:52.475 [2024-12-05 19:42:11.418096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.475 [2024-12-05 19:42:11.418278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.475 [2024-12-05 19:42:11.418289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:52.475 [2024-12-05 19:42:11.418298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:23:52.475 [2024-12-05 19:42:11.418305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.475 [2024-12-05 19:42:11.461384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.475 [2024-12-05 19:42:11.461443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:52.475 [2024-12-05 19:42:11.461459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.056 ms 00:23:52.475 [2024-12-05 19:42:11.461467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.475 [2024-12-05 19:42:11.461589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.475 [2024-12-05 19:42:11.461601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:52.475 [2024-12-05 19:42:11.461610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:52.475 [2024-12-05 19:42:11.461618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.475 [2024-12-05 19:42:11.461952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.475 [2024-12-05 19:42:11.461974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:52.475 [2024-12-05 19:42:11.461989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:23:52.475 [2024-12-05 19:42:11.462014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.475 [2024-12-05 
19:42:11.462160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.475 [2024-12-05 19:42:11.462171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:52.475 [2024-12-05 19:42:11.462179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:23:52.475 [2024-12-05 19:42:11.462186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.475 [2024-12-05 19:42:11.475411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.475 [2024-12-05 19:42:11.475449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:52.475 [2024-12-05 19:42:11.475460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.203 ms 00:23:52.475 [2024-12-05 19:42:11.475468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.735 [2024-12-05 19:42:11.488647] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:52.735 [2024-12-05 19:42:11.488711] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:52.735 [2024-12-05 19:42:11.488731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.735 [2024-12-05 19:42:11.488744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:52.735 [2024-12-05 19:42:11.488761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.145 ms 00:23:52.735 [2024-12-05 19:42:11.488773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.735 [2024-12-05 19:42:11.519097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.735 [2024-12-05 19:42:11.519178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:52.735 [2024-12-05 19:42:11.519194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.179 ms 00:23:52.735 [2024-12-05 19:42:11.519202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.735 [2024-12-05 19:42:11.531508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.735 [2024-12-05 19:42:11.531557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:52.735 [2024-12-05 19:42:11.531570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.169 ms 00:23:52.735 [2024-12-05 19:42:11.531577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.735 [2024-12-05 19:42:11.543150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.735 [2024-12-05 19:42:11.543196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:52.735 [2024-12-05 19:42:11.543208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.474 ms 00:23:52.735 [2024-12-05 19:42:11.543215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.735 [2024-12-05 19:42:11.543860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.735 [2024-12-05 19:42:11.543879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:52.735 [2024-12-05 19:42:11.543888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:23:52.735 [2024-12-05 19:42:11.543895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.735 [2024-12-05 19:42:11.599411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:52.735 [2024-12-05 19:42:11.599472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:52.735 [2024-12-05 19:42:11.599486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.491 ms 00:23:52.735 [2024-12-05 19:42:11.599494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.735 [2024-12-05 19:42:11.610325] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:52.735 [2024-12-05 19:42:11.624861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.735 [2024-12-05 19:42:11.624906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:52.735 [2024-12-05 19:42:11.624918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.247 ms 00:23:52.735 [2024-12-05 19:42:11.624930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.735 [2024-12-05 19:42:11.625032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.735 [2024-12-05 19:42:11.625042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:52.735 [2024-12-05 19:42:11.625051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:52.735 [2024-12-05 19:42:11.625059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.735 [2024-12-05 19:42:11.625106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.735 [2024-12-05 19:42:11.625114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:52.735 [2024-12-05 19:42:11.625122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:52.735 [2024-12-05 19:42:11.625160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.735 [2024-12-05 19:42:11.625191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.735 [2024-12-05 19:42:11.625200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:52.735 [2024-12-05 19:42:11.625208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:52.735 [2024-12-05 19:42:11.625215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.735 [2024-12-05 19:42:11.625246] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:52.735 [2024-12-05 19:42:11.625255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.735 [2024-12-05 19:42:11.625263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:52.735 [2024-12-05 19:42:11.625270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:52.735 [2024-12-05 19:42:11.625277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.735 [2024-12-05 19:42:11.648615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.735 [2024-12-05 19:42:11.648662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:52.735 [2024-12-05 19:42:11.648675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.315 ms 00:23:52.735 [2024-12-05 19:42:11.648683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.735 [2024-12-05 19:42:11.648781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.735 [2024-12-05 19:42:11.648791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:23:52.735 [2024-12-05 19:42:11.648800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:52.735 [2024-12-05 19:42:11.648807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.735 [2024-12-05 19:42:11.649642] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:52.735 [2024-12-05 19:42:11.652911] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 285.871 ms, result 0 00:23:52.735 [2024-12-05 19:42:11.653578] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:52.735 [2024-12-05 19:42:11.666568] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:53.724  [2024-12-05T19:42:14.106Z] Copying: 44/256 [MB] (44 MBps) [2024-12-05T19:42:14.673Z] Copying: 88/256 [MB] (44 MBps) [2024-12-05T19:42:16.084Z] Copying: 130/256 [MB] (41 MBps) [2024-12-05T19:42:17.017Z] Copying: 172/256 [MB] (42 MBps) [2024-12-05T19:42:17.953Z] Copying: 215/256 [MB] (42 MBps) [2024-12-05T19:42:17.953Z] Copying: 256/256 [MB] (average 42 MBps)[2024-12-05 19:42:17.623942] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:58.947 [2024-12-05 19:42:17.632995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.947 [2024-12-05 19:42:17.633042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:58.947 [2024-12-05 19:42:17.633065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:58.947 [2024-12-05 19:42:17.633074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.947 [2024-12-05 19:42:17.633097] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:58.947 [2024-12-05 19:42:17.635691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.947 [2024-12-05 19:42:17.635723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:58.947 [2024-12-05 19:42:17.635734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.581 ms 00:23:58.947 [2024-12-05 19:42:17.635742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.947 [2024-12-05 19:42:17.636006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.947 [2024-12-05 19:42:17.636017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:58.947 [2024-12-05 19:42:17.636032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:23:58.947 [2024-12-05 19:42:17.636040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.947 [2024-12-05 19:42:17.639734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.947 [2024-12-05 19:42:17.639754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:58.947 [2024-12-05 19:42:17.639764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.676 ms 00:23:58.947 [2024-12-05 19:42:17.639772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.947 [2024-12-05 19:42:17.646691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.947 [2024-12-05 19:42:17.646911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 
00:23:58.947 [2024-12-05 19:42:17.646927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.901 ms 00:23:58.947 [2024-12-05 19:42:17.646935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.947 [2024-12-05 19:42:17.670442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.947 [2024-12-05 19:42:17.670490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:58.947 [2024-12-05 19:42:17.670502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.442 ms 00:23:58.947 [2024-12-05 19:42:17.670511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.947 [2024-12-05 19:42:17.685309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.947 [2024-12-05 19:42:17.685504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:58.947 [2024-12-05 19:42:17.685528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.744 ms 00:23:58.947 [2024-12-05 19:42:17.685536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.947 [2024-12-05 19:42:17.685688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.947 [2024-12-05 19:42:17.685700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:58.947 [2024-12-05 19:42:17.685716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:23:58.947 [2024-12-05 19:42:17.685724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.947 [2024-12-05 19:42:17.710009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.947 [2024-12-05 19:42:17.710061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:58.947 [2024-12-05 19:42:17.710074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.267 ms 00:23:58.947 [2024-12-05 19:42:17.710081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.947 [2024-12-05 19:42:17.733638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.947 [2024-12-05 19:42:17.733691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:58.947 [2024-12-05 19:42:17.733704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.520 ms 00:23:58.947 [2024-12-05 19:42:17.733712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.947 [2024-12-05 19:42:17.756779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.947 [2024-12-05 19:42:17.756977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:58.947 [2024-12-05 19:42:17.756995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.026 ms 00:23:58.947 [2024-12-05 19:42:17.757004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.947 [2024-12-05 19:42:17.781327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.947 [2024-12-05 19:42:17.781381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:58.947 [2024-12-05 19:42:17.781394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.255 ms 00:23:58.947 [2024-12-05 19:42:17.781403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.947 [2024-12-05 19:42:17.781436] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:58.947 [2024-12-05 
19:42:17.781451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 
[2024-12-05 19:42:17.781670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:58.947 [2024-12-05 19:42:17.781716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 
state: free 00:23:58.948 [2024-12-05 19:42:17.781866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.781984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 
0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:58.948 [2024-12-05 19:42:17.782332] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:58.948 [2024-12-05 19:42:17.782339] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5b576645-d761-45ff-acc8-8625a1d5c445 00:23:58.948 [2024-12-05 19:42:17.782347] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:58.948 [2024-12-05 19:42:17.782354] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:58.948 [2024-12-05 19:42:17.782361] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:58.948 [2024-12-05 19:42:17.782369] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:58.948 [2024-12-05 19:42:17.782377] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:58.948 [2024-12-05 19:42:17.782385] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:58.948 [2024-12-05 19:42:17.782395] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:58.948 [2024-12-05 19:42:17.782401] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:58.948 [2024-12-05 19:42:17.782409] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:58.948 [2024-12-05 19:42:17.782420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.948 [2024-12-05 19:42:17.782433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:58.948 [2024-12-05 19:42:17.782446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.985 ms 00:23:58.948 [2024-12-05 19:42:17.782453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.948 [2024-12-05 19:42:17.798890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.948 [2024-12-05 19:42:17.798948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:58.948 [2024-12-05 19:42:17.798965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.393 ms 00:23:58.948 [2024-12-05 19:42:17.798976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.948 [2024-12-05 19:42:17.799540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:58.948 [2024-12-05 19:42:17.799717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:58.948 [2024-12-05 19:42:17.799740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.488 ms 00:23:58.948 [2024-12-05 19:42:17.799753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.948 [2024-12-05 19:42:17.846471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.948 [2024-12-05 19:42:17.846681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:58.949 [2024-12-05 19:42:17.846700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.949 [2024-12-05 19:42:17.846714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.949 [2024-12-05 19:42:17.846818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.949 [2024-12-05 19:42:17.846830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:58.949 [2024-12-05 19:42:17.846839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.949 [2024-12-05 19:42:17.846847] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:58.949 [2024-12-05 19:42:17.846895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.949 [2024-12-05 19:42:17.846906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:58.949 [2024-12-05 19:42:17.846914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.949 [2024-12-05 19:42:17.846922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.949 [2024-12-05 19:42:17.846943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.949 [2024-12-05 19:42:17.846951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:58.949 [2024-12-05 19:42:17.846959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.949 [2024-12-05 19:42:17.846967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:58.949 [2024-12-05 19:42:17.929774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:58.949 [2024-12-05 19:42:17.929976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:58.949 [2024-12-05 19:42:17.930013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:58.949 [2024-12-05 19:42:17.930021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.207 [2024-12-05 19:42:17.992653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:59.207 [2024-12-05 19:42:17.992848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:59.207 [2024-12-05 19:42:17.992864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:59.207 [2024-12-05 19:42:17.992874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.207 [2024-12-05 19:42:17.992935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:59.207 [2024-12-05 19:42:17.992944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:59.207 [2024-12-05 19:42:17.992952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:59.207 [2024-12-05 19:42:17.992960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.207 [2024-12-05 19:42:17.992987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:59.207 [2024-12-05 19:42:17.993003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:59.207 [2024-12-05 19:42:17.993011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:59.207 [2024-12-05 19:42:17.993018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.207 [2024-12-05 19:42:17.993109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:59.207 [2024-12-05 19:42:17.993119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:59.207 [2024-12-05 19:42:17.993152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:59.207 [2024-12-05 19:42:17.993161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.207 [2024-12-05 19:42:17.993192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:59.207 [2024-12-05 19:42:17.993202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:59.207 [2024-12-05 19:42:17.993212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:23:59.208 [2024-12-05 19:42:17.993220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.208 [2024-12-05 19:42:17.993255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:59.208 [2024-12-05 19:42:17.993264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:59.208 [2024-12-05 19:42:17.993272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:59.208 [2024-12-05 19:42:17.993279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.208 [2024-12-05 19:42:17.993320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:59.208 [2024-12-05 19:42:17.993333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:59.208 [2024-12-05 19:42:17.993340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:59.208 [2024-12-05 19:42:17.993347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.208 [2024-12-05 19:42:17.993475] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 360.481 ms, result 0 00:24:00.141 00:24:00.141 00:24:00.141 19:42:19 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:24:00.141 19:42:19 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:24:00.706 19:42:19 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:00.706 [2024-12-05 19:42:19.700640] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
00:24:00.706 [2024-12-05 19:42:19.700769] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77007 ] 00:24:00.963 [2024-12-05 19:42:19.857826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.963 [2024-12-05 19:42:19.957923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.221 [2024-12-05 19:42:20.216533] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:01.221 [2024-12-05 19:42:20.216607] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:01.480 [2024-12-05 19:42:20.370009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.480 [2024-12-05 19:42:20.370068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:01.480 [2024-12-05 19:42:20.370080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:01.480 [2024-12-05 19:42:20.370089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.480 [2024-12-05 19:42:20.372810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.480 [2024-12-05 19:42:20.372850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:01.480 [2024-12-05 19:42:20.372862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.701 ms 00:24:01.480 [2024-12-05 19:42:20.372871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.480 [2024-12-05 19:42:20.372953] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:01.480 [2024-12-05 19:42:20.373952] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:01.480 [2024-12-05 19:42:20.374148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.480 [2024-12-05 19:42:20.374162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:01.480 [2024-12-05 19:42:20.374172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.203 ms 00:24:01.480 [2024-12-05 19:42:20.374180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.480 [2024-12-05 19:42:20.375419] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:01.480 [2024-12-05 19:42:20.387839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.480 [2024-12-05 19:42:20.387885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:01.480 [2024-12-05 19:42:20.387898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.419 ms 00:24:01.480 [2024-12-05 19:42:20.387906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.480 [2024-12-05 19:42:20.388023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.480 [2024-12-05 19:42:20.388035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:01.480 [2024-12-05 19:42:20.388043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:24:01.480 [2024-12-05 19:42:20.388050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.480 [2024-12-05 19:42:20.393193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:01.480 [2024-12-05 19:42:20.393369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:01.480 [2024-12-05 19:42:20.393385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.099 ms 00:24:01.480 [2024-12-05 19:42:20.393392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.480 [2024-12-05 19:42:20.393493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.480 [2024-12-05 19:42:20.393503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:01.480 [2024-12-05 19:42:20.393512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:01.480 [2024-12-05 19:42:20.393520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.480 [2024-12-05 19:42:20.393549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.480 [2024-12-05 19:42:20.393557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:01.480 [2024-12-05 19:42:20.393565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:01.480 [2024-12-05 19:42:20.393572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.480 [2024-12-05 19:42:20.393594] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:01.480 [2024-12-05 19:42:20.396856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.480 [2024-12-05 19:42:20.396984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:01.480 [2024-12-05 19:42:20.396999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.268 ms 00:24:01.480 [2024-12-05 19:42:20.397008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.481 [2024-12-05 19:42:20.397053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.481 [2024-12-05 19:42:20.397062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:01.481 [2024-12-05 19:42:20.397070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:01.481 [2024-12-05 19:42:20.397079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.481 [2024-12-05 19:42:20.397099] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:01.481 [2024-12-05 19:42:20.397118] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:01.481 [2024-12-05 19:42:20.397168] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:01.481 [2024-12-05 19:42:20.397184] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:01.481 [2024-12-05 19:42:20.397287] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:01.481 [2024-12-05 19:42:20.397298] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:01.481 [2024-12-05 19:42:20.397308] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:01.481 [2024-12-05 19:42:20.397321] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:01.481 [2024-12-05 19:42:20.397330] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:01.481 [2024-12-05 19:42:20.397337] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:01.481 [2024-12-05 19:42:20.397344] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:01.481 [2024-12-05 19:42:20.397352] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:01.481 [2024-12-05 19:42:20.397358] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:01.481 [2024-12-05 19:42:20.397366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.481 [2024-12-05 19:42:20.397373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:01.481 [2024-12-05 19:42:20.397381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:24:01.481 [2024-12-05 19:42:20.397388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.481 [2024-12-05 19:42:20.397475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.481 [2024-12-05 19:42:20.397486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:01.481 [2024-12-05 19:42:20.397493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:01.481 [2024-12-05 19:42:20.397499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.481 [2024-12-05 19:42:20.397617] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:01.481 [2024-12-05 19:42:20.397628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:01.481 [2024-12-05 19:42:20.397636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:01.481 [2024-12-05 19:42:20.397645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:01.481 [2024-12-05 19:42:20.397653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:01.481 [2024-12-05 19:42:20.397659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:01.481 [2024-12-05 19:42:20.397666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:01.481 [2024-12-05 19:42:20.397672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:01.481 [2024-12-05 19:42:20.397680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:01.481 [2024-12-05 19:42:20.397687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:01.481 [2024-12-05 19:42:20.397694] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:01.481 [2024-12-05 19:42:20.397707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:01.481 [2024-12-05 19:42:20.397713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:01.481 [2024-12-05 19:42:20.397720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:01.481 [2024-12-05 19:42:20.397727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:01.481 [2024-12-05 19:42:20.397733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:01.481 [2024-12-05 19:42:20.397740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:01.481 [2024-12-05 19:42:20.397746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:01.481 [2024-12-05 19:42:20.397752] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:01.481 [2024-12-05 19:42:20.397759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:01.481 [2024-12-05 19:42:20.397766] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:01.481 [2024-12-05 19:42:20.397773] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:01.481 [2024-12-05 19:42:20.397780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:01.481 [2024-12-05 19:42:20.397786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:01.481 [2024-12-05 19:42:20.397793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:01.481 [2024-12-05 19:42:20.397799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:01.481 [2024-12-05 19:42:20.397805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:01.481 [2024-12-05 19:42:20.397812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:01.481 [2024-12-05 19:42:20.397818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:01.481 [2024-12-05 19:42:20.397825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:01.481 [2024-12-05 19:42:20.397831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:01.481 [2024-12-05 19:42:20.397837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:01.481 [2024-12-05 19:42:20.397843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:01.481 [2024-12-05 19:42:20.397849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:01.481 [2024-12-05 19:42:20.397856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:01.481 [2024-12-05 19:42:20.397863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:01.481 [2024-12-05 19:42:20.397869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:01.481 [2024-12-05 19:42:20.397875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:01.481 [2024-12-05 19:42:20.397882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:01.481 [2024-12-05 19:42:20.397888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:01.481 [2024-12-05 19:42:20.397895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:01.481 [2024-12-05 19:42:20.397901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:01.481 [2024-12-05 19:42:20.397907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:01.481 [2024-12-05 19:42:20.397914] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:01.481 [2024-12-05 19:42:20.397922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:01.481 [2024-12-05 19:42:20.397931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:01.481 [2024-12-05 19:42:20.397938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:01.481 [2024-12-05 19:42:20.397945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:01.481 [2024-12-05 19:42:20.397951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:01.481 [2024-12-05 19:42:20.397958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:01.481 
[2024-12-05 19:42:20.397965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:01.481 [2024-12-05 19:42:20.397971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:01.481 [2024-12-05 19:42:20.397977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:01.481 [2024-12-05 19:42:20.397986] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:01.481 [2024-12-05 19:42:20.398015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:01.482 [2024-12-05 19:42:20.398024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:01.482 [2024-12-05 19:42:20.398032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:01.482 [2024-12-05 19:42:20.398039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:01.482 [2024-12-05 19:42:20.398046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:01.482 [2024-12-05 19:42:20.398054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:01.482 [2024-12-05 19:42:20.398061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:01.482 [2024-12-05 19:42:20.398068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:01.482 [2024-12-05 19:42:20.398075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:01.482 [2024-12-05 19:42:20.398082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:01.482 [2024-12-05 19:42:20.398089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:01.482 [2024-12-05 19:42:20.398096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:01.482 [2024-12-05 19:42:20.398103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:01.482 [2024-12-05 19:42:20.398110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:01.482 [2024-12-05 19:42:20.398117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:01.482 [2024-12-05 19:42:20.398124] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:01.482 [2024-12-05 19:42:20.398149] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:01.482 [2024-12-05 19:42:20.398158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:01.482 [2024-12-05 19:42:20.398166] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:01.482 [2024-12-05 19:42:20.398174] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:01.482 [2024-12-05 19:42:20.398181] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:01.482 [2024-12-05 19:42:20.398188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.482 [2024-12-05 19:42:20.398200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:01.482 [2024-12-05 19:42:20.398208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:24:01.482 [2024-12-05 19:42:20.398215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.482 [2024-12-05 19:42:20.423959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.482 [2024-12-05 19:42:20.424152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:01.482 [2024-12-05 19:42:20.424170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.663 ms 00:24:01.482 [2024-12-05 19:42:20.424178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.482 [2024-12-05 19:42:20.424325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.482 [2024-12-05 19:42:20.424335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:01.482 [2024-12-05 19:42:20.424343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:24:01.482 [2024-12-05 19:42:20.424350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.482 [2024-12-05 19:42:20.471678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.482 [2024-12-05 19:42:20.471724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:01.482 [2024-12-05 19:42:20.471740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.305 ms 00:24:01.482 [2024-12-05 19:42:20.471749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.482 [2024-12-05 19:42:20.471865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.482 [2024-12-05 19:42:20.471878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:01.482 [2024-12-05 19:42:20.471887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:01.482 [2024-12-05 19:42:20.471894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.482 [2024-12-05 19:42:20.472236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.482 [2024-12-05 19:42:20.472252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:01.482 [2024-12-05 19:42:20.472267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:24:01.482 [2024-12-05 19:42:20.472275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.482 [2024-12-05 19:42:20.472405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.482 [2024-12-05 19:42:20.472414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:01.482 [2024-12-05 19:42:20.472422] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:24:01.482 [2024-12-05 19:42:20.472429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.739 [2024-12-05 19:42:20.485754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.740 [2024-12-05 19:42:20.485793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:01.740 [2024-12-05 19:42:20.485805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.304 ms 00:24:01.740 [2024-12-05 19:42:20.485813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.740 [2024-12-05 19:42:20.497942] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:24:01.740 [2024-12-05 19:42:20.497998] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:01.740 [2024-12-05 19:42:20.498011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.740 [2024-12-05 19:42:20.498020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:01.740 [2024-12-05 19:42:20.498030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.075 ms 00:24:01.740 [2024-12-05 19:42:20.498038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.740 [2024-12-05 19:42:20.522201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.740 [2024-12-05 19:42:20.522256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:01.740 [2024-12-05 19:42:20.522269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.049 ms 00:24:01.740 [2024-12-05 19:42:20.522278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.740 [2024-12-05 19:42:20.534189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.740 [2024-12-05 19:42:20.534240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:01.740 [2024-12-05 19:42:20.534252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.801 ms 00:24:01.740 [2024-12-05 19:42:20.534259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.740 [2024-12-05 19:42:20.545657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.740 [2024-12-05 19:42:20.545830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:01.740 [2024-12-05 19:42:20.545847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.303 ms 00:24:01.740 [2024-12-05 19:42:20.545855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.740 [2024-12-05 19:42:20.546524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.740 [2024-12-05 19:42:20.546545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:01.740 [2024-12-05 19:42:20.546554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:24:01.740 [2024-12-05 19:42:20.546562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.740 [2024-12-05 19:42:20.601751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.740 [2024-12-05 19:42:20.601809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:01.740 [2024-12-05 19:42:20.601823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 55.164 ms 00:24:01.740 [2024-12-05 19:42:20.601831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.740 [2024-12-05 19:42:20.612745] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:01.740 [2024-12-05 19:42:20.627258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.740 [2024-12-05 19:42:20.627312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:01.740 [2024-12-05 19:42:20.627323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.304 ms 00:24:01.740 [2024-12-05 19:42:20.627336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.740 [2024-12-05 19:42:20.627435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.740 [2024-12-05 19:42:20.627446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:01.740 [2024-12-05 19:42:20.627454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:01.740 [2024-12-05 19:42:20.627462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.740 [2024-12-05 19:42:20.627510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.740 [2024-12-05 19:42:20.627519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:01.740 [2024-12-05 19:42:20.627526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:01.740 [2024-12-05 19:42:20.627536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.740 [2024-12-05 19:42:20.627565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.740 [2024-12-05 19:42:20.627574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:01.740 [2024-12-05 19:42:20.627582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:01.740 [2024-12-05 19:42:20.627589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.740 [2024-12-05 19:42:20.627621] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:01.740 [2024-12-05 19:42:20.627631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.740 [2024-12-05 19:42:20.627638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:01.740 [2024-12-05 19:42:20.627645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:01.740 [2024-12-05 19:42:20.627652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.740 [2024-12-05 19:42:20.651745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.740 [2024-12-05 19:42:20.651798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:01.740 [2024-12-05 19:42:20.651810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.073 ms 00:24:01.740 [2024-12-05 19:42:20.651818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.740 [2024-12-05 19:42:20.651929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.740 [2024-12-05 19:42:20.651940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:01.740 [2024-12-05 19:42:20.651949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:01.740 [2024-12-05 19:42:20.651956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
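Each management step in the trace above is reported as an Action record followed by name/duration/status records, and finish_msg then prints the whole-process total (for example 'FTL startup', 282.514 ms just below). A minimal sketch that tallies per-step time from a saved copy of this console output ("console.log" is a hypothetical file name; assumes one NOTICE record per line, as in the raw console):

    # Sum the per-step "duration: N ms" records emitted by trace_step so the
    # slowest FTL management steps stand out. "console.log" is hypothetical.
    from collections import defaultdict

    totals = defaultdict(float)
    step = None
    with open("console.log") as log:
        for line in log:
            if "trace_step" not in line:
                continue
            if "name:" in line:
                step = line.rsplit("name:", 1)[1].strip()
            elif "duration:" in line and step:
                totals[step] += float(line.rsplit("duration:", 1)[1].split("ms")[0])

    for name, ms in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{ms:9.3f} ms  {name}")

Note that the per-step durations need not sum exactly to the finish_msg total, since time spent between steps is not attributed to any of them.
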
00:24:01.740 [2024-12-05 19:42:20.652811] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:01.740 [2024-12-05 19:42:20.656232] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 282.514 ms, result 0 00:24:01.740 [2024-12-05 19:42:20.656885] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:01.740 [2024-12-05 19:42:20.670049] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:02.000  [2024-12-05T19:42:21.006Z] Copying: 4096/4096 [kB] (average 40 MBps)[2024-12-05 19:42:20.774091] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:02.000 [2024-12-05 19:42:20.783251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.000 [2024-12-05 19:42:20.783309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:02.000 [2024-12-05 19:42:20.783330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:02.000 [2024-12-05 19:42:20.783339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.000 [2024-12-05 19:42:20.783361] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:02.000 [2024-12-05 19:42:20.785954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.000 [2024-12-05 19:42:20.785996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:02.000 [2024-12-05 19:42:20.786007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.579 ms 00:24:02.000 [2024-12-05 19:42:20.786015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.000 [2024-12-05 19:42:20.787411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.000 [2024-12-05 19:42:20.787441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:02.000 [2024-12-05 19:42:20.787450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.372 ms 00:24:02.000 [2024-12-05 19:42:20.787458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.001 [2024-12-05 19:42:20.791424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.001 [2024-12-05 19:42:20.791448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:02.001 [2024-12-05 19:42:20.791458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.948 ms 00:24:02.001 [2024-12-05 19:42:20.791466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.001 [2024-12-05 19:42:20.798343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.001 [2024-12-05 19:42:20.798374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:02.001 [2024-12-05 19:42:20.798385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.853 ms 00:24:02.001 [2024-12-05 19:42:20.798393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.001 [2024-12-05 19:42:20.822100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.001 [2024-12-05 19:42:20.822158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:02.001 [2024-12-05 19:42:20.822170] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 23.639 ms 00:24:02.001 [2024-12-05 19:42:20.822177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.001 [2024-12-05 19:42:20.836433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.001 [2024-12-05 19:42:20.836642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:02.001 [2024-12-05 19:42:20.836660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.212 ms 00:24:02.001 [2024-12-05 19:42:20.836668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.001 [2024-12-05 19:42:20.836821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.001 [2024-12-05 19:42:20.836832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:02.001 [2024-12-05 19:42:20.836847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:24:02.001 [2024-12-05 19:42:20.836854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.001 [2024-12-05 19:42:20.860494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.001 [2024-12-05 19:42:20.860540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:02.001 [2024-12-05 19:42:20.860552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.620 ms 00:24:02.001 [2024-12-05 19:42:20.860559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.001 [2024-12-05 19:42:20.884370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.001 [2024-12-05 19:42:20.884419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:02.001 [2024-12-05 19:42:20.884430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.767 ms 00:24:02.001 [2024-12-05 19:42:20.884437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.001 [2024-12-05 19:42:20.907481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.001 [2024-12-05 19:42:20.907723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:02.001 [2024-12-05 19:42:20.907739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.998 ms 00:24:02.001 [2024-12-05 19:42:20.907747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.001 [2024-12-05 19:42:20.930758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.001 [2024-12-05 19:42:20.930942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:02.001 [2024-12-05 19:42:20.930994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.937 ms 00:24:02.001 [2024-12-05 19:42:20.931015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.001 [2024-12-05 19:42:20.931069] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:02.001 [2024-12-05 19:42:20.931099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.931140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.931170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.931198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:24:02.001 [2024-12-05 19:42:20.931311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.931341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.931370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.931399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.931427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.931456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.931541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.931608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.931639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.931667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.931696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.931754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.931785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.931814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.931841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.931869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.931926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.931956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.932012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.932044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.932072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.932122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.932163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.932222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.932251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.932302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.932333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.932361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.932390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.932602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.932635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.932663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.932692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.932721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.932791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.932821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.932849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:02.001 [2024-12-05 19:42:20.932877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.932905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.932976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.933997] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.934005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.934012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.934020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.934027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.934035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.934042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.934049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.934057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.934067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.934075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.934082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.934090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.934097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.934105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.934124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.934142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.934149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.934157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.934165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.934172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.934180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:02.002 [2024-12-05 19:42:20.934197] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:02.002 [2024-12-05 19:42:20.934206] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5b576645-d761-45ff-acc8-8625a1d5c445 00:24:02.002 [2024-12-05 19:42:20.934213] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:02.002 [2024-12-05 19:42:20.934220] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:24:02.002 [2024-12-05 19:42:20.934228] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:02.002 [2024-12-05 19:42:20.934236] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:02.002 [2024-12-05 19:42:20.934242] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:02.002 [2024-12-05 19:42:20.934250] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:02.002 [2024-12-05 19:42:20.934260] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:02.002 [2024-12-05 19:42:20.934266] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:02.002 [2024-12-05 19:42:20.934273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:02.002 [2024-12-05 19:42:20.934281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.003 [2024-12-05 19:42:20.934288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:02.003 [2024-12-05 19:42:20.934297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.213 ms 00:24:02.003 [2024-12-05 19:42:20.934304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.003 [2024-12-05 19:42:20.947116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.003 [2024-12-05 19:42:20.947293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:02.003 [2024-12-05 19:42:20.947343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.765 ms 00:24:02.003 [2024-12-05 19:42:20.947365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.003 [2024-12-05 19:42:20.947820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.003 [2024-12-05 19:42:20.947892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:02.003 [2024-12-05 19:42:20.947942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:24:02.003 [2024-12-05 19:42:20.947964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.003 [2024-12-05 19:42:20.982607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.003 [2024-12-05 19:42:20.982772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:02.003 [2024-12-05 19:42:20.982823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.003 [2024-12-05 19:42:20.982849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.003 [2024-12-05 19:42:20.982948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.003 [2024-12-05 19:42:20.982969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:02.003 [2024-12-05 19:42:20.982988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.003 [2024-12-05 19:42:20.983006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.003 [2024-12-05 19:42:20.983064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.003 [2024-12-05 19:42:20.983531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:02.003 [2024-12-05 19:42:20.983616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.003 [2024-12-05 19:42:20.983675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.003 [2024-12-05 19:42:20.983730] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.003 [2024-12-05 19:42:20.983784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:02.003 [2024-12-05 19:42:20.983806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.003 [2024-12-05 19:42:20.983825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.262 [2024-12-05 19:42:21.059572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.262 [2024-12-05 19:42:21.059787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:02.262 [2024-12-05 19:42:21.059864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.262 [2024-12-05 19:42:21.059897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.262 [2024-12-05 19:42:21.123386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.262 [2024-12-05 19:42:21.123587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:02.262 [2024-12-05 19:42:21.123636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.262 [2024-12-05 19:42:21.123658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.262 [2024-12-05 19:42:21.123724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.262 [2024-12-05 19:42:21.123746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:02.262 [2024-12-05 19:42:21.123765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.262 [2024-12-05 19:42:21.123783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.262 [2024-12-05 19:42:21.123821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.262 [2024-12-05 19:42:21.123848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:02.262 [2024-12-05 19:42:21.123867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.262 [2024-12-05 19:42:21.123936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.262 [2024-12-05 19:42:21.124045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.262 [2024-12-05 19:42:21.124068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:02.262 [2024-12-05 19:42:21.124087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.262 [2024-12-05 19:42:21.124105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.262 [2024-12-05 19:42:21.124168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.262 [2024-12-05 19:42:21.124194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:02.262 [2024-12-05 19:42:21.124219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.262 [2024-12-05 19:42:21.124238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.262 [2024-12-05 19:42:21.124333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.262 [2024-12-05 19:42:21.124357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:02.262 [2024-12-05 19:42:21.124377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.262 [2024-12-05 19:42:21.124395] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:02.262 [2024-12-05 19:42:21.124448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.262 [2024-12-05 19:42:21.124547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:02.262 [2024-12-05 19:42:21.124565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.262 [2024-12-05 19:42:21.124583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.262 [2024-12-05 19:42:21.124722] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 341.469 ms, result 0 00:24:02.827 00:24:02.827 00:24:03.085 19:42:21 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=77032 00:24:03.085 19:42:21 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:24:03.085 19:42:21 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 77032 00:24:03.085 19:42:21 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 77032 ']' 00:24:03.085 19:42:21 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.085 19:42:21 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:03.085 19:42:21 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.085 19:42:21 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.085 19:42:21 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:24:03.085 [2024-12-05 19:42:21.925217] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
00:24:03.085 [2024-12-05 19:42:21.925543] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77032 ] 00:24:03.085 [2024-12-05 19:42:22.087298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.342 [2024-12-05 19:42:22.187212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.908 19:42:22 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:03.908 19:42:22 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:24:03.908 19:42:22 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:24:04.166 [2024-12-05 19:42:23.115369] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:04.166 [2024-12-05 19:42:23.115444] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:04.425 [2024-12-05 19:42:23.285556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.425 [2024-12-05 19:42:23.285614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:04.425 [2024-12-05 19:42:23.285630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:04.425 [2024-12-05 19:42:23.285639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.425 [2024-12-05 19:42:23.288442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.425 [2024-12-05 19:42:23.288481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:04.425 [2024-12-05 19:42:23.288493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.783 ms 00:24:04.425 [2024-12-05 19:42:23.288501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.425 [2024-12-05 19:42:23.288650] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:04.425 [2024-12-05 19:42:23.289359] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:04.425 [2024-12-05 19:42:23.289497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.425 [2024-12-05 19:42:23.289507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:04.425 [2024-12-05 19:42:23.289518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.861 ms 00:24:04.425 [2024-12-05 19:42:23.289526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.425 [2024-12-05 19:42:23.290906] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:04.425 [2024-12-05 19:42:23.303384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.425 [2024-12-05 19:42:23.303445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:04.425 [2024-12-05 19:42:23.303459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.480 ms 00:24:04.425 [2024-12-05 19:42:23.303470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.425 [2024-12-05 19:42:23.303581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.425 [2024-12-05 19:42:23.303594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:04.425 [2024-12-05 19:42:23.303603] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:04.425 [2024-12-05 19:42:23.303612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.425 [2024-12-05 19:42:23.308794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.425 [2024-12-05 19:42:23.308843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:04.425 [2024-12-05 19:42:23.308853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.131 ms 00:24:04.425 [2024-12-05 19:42:23.308862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.425 [2024-12-05 19:42:23.308984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.425 [2024-12-05 19:42:23.308997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:04.425 [2024-12-05 19:42:23.309005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:04.425 [2024-12-05 19:42:23.309018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.425 [2024-12-05 19:42:23.309048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.425 [2024-12-05 19:42:23.309057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:04.425 [2024-12-05 19:42:23.309065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:04.425 [2024-12-05 19:42:23.309074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.425 [2024-12-05 19:42:23.309099] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:04.425 [2024-12-05 19:42:23.312584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.425 [2024-12-05 19:42:23.312617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:04.425 [2024-12-05 19:42:23.312628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.489 ms 00:24:04.425 [2024-12-05 19:42:23.312637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.425 [2024-12-05 19:42:23.312681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.425 [2024-12-05 19:42:23.312689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:04.425 [2024-12-05 19:42:23.312699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:04.425 [2024-12-05 19:42:23.312708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.425 [2024-12-05 19:42:23.312729] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:04.425 [2024-12-05 19:42:23.312747] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:04.425 [2024-12-05 19:42:23.312790] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:04.425 [2024-12-05 19:42:23.312805] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:04.425 [2024-12-05 19:42:23.312911] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:04.425 [2024-12-05 19:42:23.312921] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:04.425 [2024-12-05 19:42:23.312935] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:04.425 [2024-12-05 19:42:23.312945] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:04.425 [2024-12-05 19:42:23.312956] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:04.425 [2024-12-05 19:42:23.312964] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:04.425 [2024-12-05 19:42:23.312973] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:04.425 [2024-12-05 19:42:23.312980] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:04.425 [2024-12-05 19:42:23.312990] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:04.425 [2024-12-05 19:42:23.312998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.425 [2024-12-05 19:42:23.313006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:04.425 [2024-12-05 19:42:23.313014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:24:04.425 [2024-12-05 19:42:23.313023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.425 [2024-12-05 19:42:23.313143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.425 [2024-12-05 19:42:23.313156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:04.425 [2024-12-05 19:42:23.313164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:04.425 [2024-12-05 19:42:23.313172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.425 [2024-12-05 19:42:23.313273] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:04.425 [2024-12-05 19:42:23.313284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:04.425 [2024-12-05 19:42:23.313292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:04.425 [2024-12-05 19:42:23.313302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:04.425 [2024-12-05 19:42:23.313310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:04.425 [2024-12-05 19:42:23.313320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:04.425 [2024-12-05 19:42:23.313327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:04.425 [2024-12-05 19:42:23.313338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:04.425 [2024-12-05 19:42:23.313345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:04.425 [2024-12-05 19:42:23.313353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:04.425 [2024-12-05 19:42:23.313360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:04.425 [2024-12-05 19:42:23.313368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:04.425 [2024-12-05 19:42:23.313374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:04.425 [2024-12-05 19:42:23.313382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:04.425 [2024-12-05 19:42:23.313389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:04.425 [2024-12-05 19:42:23.313396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:04.425 
[2024-12-05 19:42:23.313403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:04.425 [2024-12-05 19:42:23.313410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:04.425 [2024-12-05 19:42:23.313422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:04.425 [2024-12-05 19:42:23.313430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:04.426 [2024-12-05 19:42:23.313437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:04.426 [2024-12-05 19:42:23.313444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:04.426 [2024-12-05 19:42:23.313451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:04.426 [2024-12-05 19:42:23.313460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:04.426 [2024-12-05 19:42:23.313467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:04.426 [2024-12-05 19:42:23.313474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:04.426 [2024-12-05 19:42:23.313482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:04.426 [2024-12-05 19:42:23.313490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:04.426 [2024-12-05 19:42:23.313497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:04.426 [2024-12-05 19:42:23.313506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:04.426 [2024-12-05 19:42:23.313513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:04.426 [2024-12-05 19:42:23.313520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:04.426 [2024-12-05 19:42:23.313527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:04.426 [2024-12-05 19:42:23.313535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:04.426 [2024-12-05 19:42:23.313541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:04.426 [2024-12-05 19:42:23.313550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:04.426 [2024-12-05 19:42:23.313556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:04.426 [2024-12-05 19:42:23.313564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:04.426 [2024-12-05 19:42:23.313570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:04.426 [2024-12-05 19:42:23.313579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:04.426 [2024-12-05 19:42:23.313586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:04.426 [2024-12-05 19:42:23.313594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:04.426 [2024-12-05 19:42:23.313601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:04.426 [2024-12-05 19:42:23.313610] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:04.426 [2024-12-05 19:42:23.313619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:04.426 [2024-12-05 19:42:23.313627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:04.426 [2024-12-05 19:42:23.313635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:04.426 [2024-12-05 19:42:23.313644] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:24:04.426 [2024-12-05 19:42:23.313651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:04.426 [2024-12-05 19:42:23.313658] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:04.426 [2024-12-05 19:42:23.313665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:04.426 [2024-12-05 19:42:23.313673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:04.426 [2024-12-05 19:42:23.313679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:04.426 [2024-12-05 19:42:23.313689] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:04.426 [2024-12-05 19:42:23.313697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:04.426 [2024-12-05 19:42:23.313710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:04.426 [2024-12-05 19:42:23.313717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:04.426 [2024-12-05 19:42:23.313725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:04.426 [2024-12-05 19:42:23.313733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:04.426 [2024-12-05 19:42:23.313741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:04.426 [2024-12-05 19:42:23.313748] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:04.426 [2024-12-05 19:42:23.313757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:04.426 [2024-12-05 19:42:23.313764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:04.426 [2024-12-05 19:42:23.313772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:04.426 [2024-12-05 19:42:23.313780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:04.426 [2024-12-05 19:42:23.313789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:04.426 [2024-12-05 19:42:23.313796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:04.426 [2024-12-05 19:42:23.313804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:04.426 [2024-12-05 19:42:23.313811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:04.426 [2024-12-05 19:42:23.313820] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:04.426 [2024-12-05 
19:42:23.313828] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:04.426 [2024-12-05 19:42:23.313839] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:04.426 [2024-12-05 19:42:23.313846] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:04.426 [2024-12-05 19:42:23.313855] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:04.426 [2024-12-05 19:42:23.313863] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:04.426 [2024-12-05 19:42:23.313872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.426 [2024-12-05 19:42:23.313879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:04.426 [2024-12-05 19:42:23.313888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:24:04.426 [2024-12-05 19:42:23.313896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.426 [2024-12-05 19:42:23.339520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.426 [2024-12-05 19:42:23.339744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:04.426 [2024-12-05 19:42:23.339765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.553 ms 00:24:04.426 [2024-12-05 19:42:23.339777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.426 [2024-12-05 19:42:23.339924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.426 [2024-12-05 19:42:23.339934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:04.426 [2024-12-05 19:42:23.339944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:04.426 [2024-12-05 19:42:23.339952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.426 [2024-12-05 19:42:23.370408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.426 [2024-12-05 19:42:23.370625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:04.426 [2024-12-05 19:42:23.370645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.431 ms 00:24:04.426 [2024-12-05 19:42:23.370654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.426 [2024-12-05 19:42:23.370739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.426 [2024-12-05 19:42:23.370748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:04.426 [2024-12-05 19:42:23.370758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:04.426 [2024-12-05 19:42:23.370765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.426 [2024-12-05 19:42:23.371091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.426 [2024-12-05 19:42:23.371106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:04.426 [2024-12-05 19:42:23.371119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:24:04.426 [2024-12-05 19:42:23.371151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:04.426 [2024-12-05 19:42:23.371282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.426 [2024-12-05 19:42:23.371291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:04.426 [2024-12-05 19:42:23.371300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:24:04.426 [2024-12-05 19:42:23.371308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.426 [2024-12-05 19:42:23.385474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.426 [2024-12-05 19:42:23.385668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:04.426 [2024-12-05 19:42:23.385689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.141 ms 00:24:04.426 [2024-12-05 19:42:23.385696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.426 [2024-12-05 19:42:23.407745] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:04.426 [2024-12-05 19:42:23.407811] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:04.426 [2024-12-05 19:42:23.407831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.426 [2024-12-05 19:42:23.407841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:04.426 [2024-12-05 19:42:23.407855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.993 ms 00:24:04.426 [2024-12-05 19:42:23.407871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.685 [2024-12-05 19:42:23.433234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.685 [2024-12-05 19:42:23.433294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:04.685 [2024-12-05 19:42:23.433309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.239 ms 00:24:04.685 [2024-12-05 19:42:23.433317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.685 [2024-12-05 19:42:23.445630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.685 [2024-12-05 19:42:23.445676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:04.685 [2024-12-05 19:42:23.445691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.198 ms 00:24:04.685 [2024-12-05 19:42:23.445699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.685 [2024-12-05 19:42:23.457283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.685 [2024-12-05 19:42:23.457463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:04.685 [2024-12-05 19:42:23.457486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.489 ms 00:24:04.685 [2024-12-05 19:42:23.457493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.685 [2024-12-05 19:42:23.458172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.685 [2024-12-05 19:42:23.458190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:04.685 [2024-12-05 19:42:23.458200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.564 ms 00:24:04.685 [2024-12-05 19:42:23.458208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.685 [2024-12-05 
19:42:23.513611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.685 [2024-12-05 19:42:23.513669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:04.685 [2024-12-05 19:42:23.513684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.372 ms 00:24:04.685 [2024-12-05 19:42:23.513692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.685 [2024-12-05 19:42:23.524602] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:04.685 [2024-12-05 19:42:23.539277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.685 [2024-12-05 19:42:23.539329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:04.685 [2024-12-05 19:42:23.539344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.454 ms 00:24:04.685 [2024-12-05 19:42:23.539355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.685 [2024-12-05 19:42:23.539449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.685 [2024-12-05 19:42:23.539461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:04.685 [2024-12-05 19:42:23.539470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:04.685 [2024-12-05 19:42:23.539479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.685 [2024-12-05 19:42:23.539526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.685 [2024-12-05 19:42:23.539537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:04.685 [2024-12-05 19:42:23.539545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:04.685 [2024-12-05 19:42:23.539556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.685 [2024-12-05 19:42:23.539579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.685 [2024-12-05 19:42:23.539589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:04.685 [2024-12-05 19:42:23.539597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:04.685 [2024-12-05 19:42:23.539608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.685 [2024-12-05 19:42:23.539639] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:04.685 [2024-12-05 19:42:23.539652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.685 [2024-12-05 19:42:23.539661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:04.685 [2024-12-05 19:42:23.539670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:04.685 [2024-12-05 19:42:23.539677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.685 [2024-12-05 19:42:23.563204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.685 [2024-12-05 19:42:23.563263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:04.685 [2024-12-05 19:42:23.563277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.497 ms 00:24:04.685 [2024-12-05 19:42:23.563285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.685 [2024-12-05 19:42:23.563407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.685 [2024-12-05 19:42:23.563418] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:04.685 [2024-12-05 19:42:23.563428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:24:04.685 [2024-12-05 19:42:23.563438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.685 [2024-12-05 19:42:23.564260] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:04.685 [2024-12-05 19:42:23.567667] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 278.414 ms, result 0 00:24:04.685 [2024-12-05 19:42:23.568579] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:04.686 Some configs were skipped because the RPC state that can call them passed over. 00:24:04.686 19:42:23 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:24:04.944 [2024-12-05 19:42:23.819307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:04.944 [2024-12-05 19:42:23.819490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:24:04.944 [2024-12-05 19:42:23.819551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.226 ms 00:24:04.944 [2024-12-05 19:42:23.819577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:04.944 [2024-12-05 19:42:23.819724] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.642 ms, result 0 00:24:04.944 true 00:24:04.944 19:42:23 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:24:05.202 [2024-12-05 19:42:24.043355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:05.202 [2024-12-05 19:42:24.043513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:24:05.202 [2024-12-05 19:42:24.043638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.975 ms 00:24:05.202 [2024-12-05 19:42:24.043669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:05.202 [2024-12-05 19:42:24.043764] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.385 ms, result 0 00:24:05.202 true 00:24:05.202 19:42:24 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 77032 00:24:05.202 19:42:24 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77032 ']' 00:24:05.202 19:42:24 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77032 00:24:05.202 19:42:24 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:24:05.202 19:42:24 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:05.202 19:42:24 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77032 00:24:05.202 killing process with pid 77032 00:24:05.202 19:42:24 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:05.202 19:42:24 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:05.202 19:42:24 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77032' 00:24:05.202 19:42:24 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 77032 00:24:05.202 19:42:24 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 77032 00:24:06.134 [2024-12-05 19:42:24.781501] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.134 [2024-12-05 19:42:24.781563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:06.134 [2024-12-05 19:42:24.781576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:06.134 [2024-12-05 19:42:24.781586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.134 [2024-12-05 19:42:24.781609] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:06.134 [2024-12-05 19:42:24.784169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.134 [2024-12-05 19:42:24.784206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:06.134 [2024-12-05 19:42:24.784224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.542 ms 00:24:06.134 [2024-12-05 19:42:24.784231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.134 [2024-12-05 19:42:24.784513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.134 [2024-12-05 19:42:24.784569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:06.134 [2024-12-05 19:42:24.784582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:24:06.134 [2024-12-05 19:42:24.784589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.134 [2024-12-05 19:42:24.788620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.134 [2024-12-05 19:42:24.788717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:06.134 [2024-12-05 19:42:24.788785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.008 ms 00:24:06.134 [2024-12-05 19:42:24.788808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.134 [2024-12-05 19:42:24.795806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.134 [2024-12-05 19:42:24.795975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:06.134 [2024-12-05 19:42:24.796039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.912 ms 00:24:06.134 [2024-12-05 19:42:24.796062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.134 [2024-12-05 19:42:24.805565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.134 [2024-12-05 19:42:24.805761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:06.134 [2024-12-05 19:42:24.805822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.407 ms 00:24:06.134 [2024-12-05 19:42:24.805849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.134 [2024-12-05 19:42:24.813296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.134 [2024-12-05 19:42:24.813479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:06.134 [2024-12-05 19:42:24.813537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.366 ms 00:24:06.134 [2024-12-05 19:42:24.813560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.134 [2024-12-05 19:42:24.813720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.134 [2024-12-05 19:42:24.813746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:06.134 [2024-12-05 19:42:24.813768] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:24:06.134 [2024-12-05 19:42:24.813813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.134 [2024-12-05 19:42:24.823464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.134 [2024-12-05 19:42:24.823636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:06.134 [2024-12-05 19:42:24.823688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.572 ms 00:24:06.134 [2024-12-05 19:42:24.823709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.134 [2024-12-05 19:42:24.832867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.134 [2024-12-05 19:42:24.833029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:06.134 [2024-12-05 19:42:24.833090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.098 ms 00:24:06.134 [2024-12-05 19:42:24.833111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.134 [2024-12-05 19:42:24.842484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.134 [2024-12-05 19:42:24.842636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:06.134 [2024-12-05 19:42:24.842692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.278 ms 00:24:06.134 [2024-12-05 19:42:24.842713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.134 [2024-12-05 19:42:24.851885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.134 [2024-12-05 19:42:24.852044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:06.134 [2024-12-05 19:42:24.852100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.085 ms 00:24:06.134 [2024-12-05 19:42:24.852122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.134 [2024-12-05 19:42:24.852193] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:06.134 [2024-12-05 19:42:24.852272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.852315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.852344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.852375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.852445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.852481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.852510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.852540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.852569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.852650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.852681] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.852711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.852784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.852817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.852846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.852918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.852947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.853003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.853109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.853190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.853223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.853280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.853387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.853419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.853481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.853514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.853568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.853603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.853631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.853685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.853754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.853788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.853843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.853884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.853913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 
[2024-12-05 19:42:24.853966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.854020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.854052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:06.134 [2024-12-05 19:42:24.854081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.854166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.854197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.854229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.854288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.854323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.854352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.854382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.854440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.854472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.854501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.854532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.854590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.854723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.854754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.854786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.854844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.854876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.854904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.854954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:24:06.135 [2024-12-05 19:42:24.855066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:06.135 [2024-12-05 19:42:24.855437] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:06.135 [2024-12-05 19:42:24.855451] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5b576645-d761-45ff-acc8-8625a1d5c445 00:24:06.135 [2024-12-05 19:42:24.855461] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:06.135 [2024-12-05 19:42:24.855470] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:06.135 [2024-12-05 19:42:24.855477] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:06.135 [2024-12-05 19:42:24.855487] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:06.135 [2024-12-05 19:42:24.855494] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:06.135 [2024-12-05 19:42:24.855503] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:06.135 [2024-12-05 19:42:24.855510] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:06.135 [2024-12-05 19:42:24.855519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:06.135 [2024-12-05 19:42:24.855525] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:06.135 [2024-12-05 19:42:24.855535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:06.135 [2024-12-05 19:42:24.855542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:06.135 [2024-12-05 19:42:24.855553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.343 ms 00:24:06.136 [2024-12-05 19:42:24.855560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.136 [2024-12-05 19:42:24.868192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.136 [2024-12-05 19:42:24.868380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:06.136 [2024-12-05 19:42:24.868439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.557 ms 00:24:06.136 [2024-12-05 19:42:24.868461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.136 [2024-12-05 19:42:24.868868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:06.136 [2024-12-05 19:42:24.868952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:06.136 [2024-12-05 19:42:24.869007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:24:06.136 [2024-12-05 19:42:24.869029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.136 [2024-12-05 19:42:24.912654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.136 [2024-12-05 19:42:24.912854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:06.136 [2024-12-05 19:42:24.912911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.136 [2024-12-05 19:42:24.912934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.136 [2024-12-05 19:42:24.913076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.136 [2024-12-05 19:42:24.913101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:06.136 [2024-12-05 19:42:24.913136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.136 [2024-12-05 19:42:24.913156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.136 [2024-12-05 19:42:24.913281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.136 [2024-12-05 19:42:24.913309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:06.136 [2024-12-05 19:42:24.913332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.136 [2024-12-05 19:42:24.913350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.136 [2024-12-05 19:42:24.913381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.136 [2024-12-05 19:42:24.913401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:06.136 [2024-12-05 19:42:24.913521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.136 [2024-12-05 19:42:24.913541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.136 [2024-12-05 19:42:24.990234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.136 [2024-12-05 19:42:24.990413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:06.136 [2024-12-05 19:42:24.990433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.136 [2024-12-05 19:42:24.990441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.136 [2024-12-05 
19:42:25.052137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.136 [2024-12-05 19:42:25.052183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:06.136 [2024-12-05 19:42:25.052194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.136 [2024-12-05 19:42:25.052203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.136 [2024-12-05 19:42:25.052285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.136 [2024-12-05 19:42:25.052294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:06.136 [2024-12-05 19:42:25.052317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.136 [2024-12-05 19:42:25.052323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.136 [2024-12-05 19:42:25.052350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.136 [2024-12-05 19:42:25.052358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:06.136 [2024-12-05 19:42:25.052366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.136 [2024-12-05 19:42:25.052373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.136 [2024-12-05 19:42:25.052455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.136 [2024-12-05 19:42:25.052463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:06.136 [2024-12-05 19:42:25.052471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.136 [2024-12-05 19:42:25.052477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.136 [2024-12-05 19:42:25.052505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.136 [2024-12-05 19:42:25.052513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:06.136 [2024-12-05 19:42:25.052521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.136 [2024-12-05 19:42:25.052528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.136 [2024-12-05 19:42:25.052561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.136 [2024-12-05 19:42:25.052568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:06.136 [2024-12-05 19:42:25.052577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.136 [2024-12-05 19:42:25.052584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.136 [2024-12-05 19:42:25.052622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:06.136 [2024-12-05 19:42:25.052630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:06.136 [2024-12-05 19:42:25.052638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:06.136 [2024-12-05 19:42:25.052644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:06.136 [2024-12-05 19:42:25.052755] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 271.237 ms, result 0 00:24:06.701 19:42:25 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:06.701 [2024-12-05 19:42:25.656541] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:24:06.701 [2024-12-05 19:42:25.656650] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77085 ] 00:24:06.964 [2024-12-05 19:42:25.804524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.964 [2024-12-05 19:42:25.891149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.222 [2024-12-05 19:42:26.112236] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:07.222 [2024-12-05 19:42:26.112296] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:07.481 [2024-12-05 19:42:26.261353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.481 [2024-12-05 19:42:26.261411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:07.481 [2024-12-05 19:42:26.261422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:07.481 [2024-12-05 19:42:26.261429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.481 [2024-12-05 19:42:26.263722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.481 [2024-12-05 19:42:26.263768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:07.481 [2024-12-05 19:42:26.263778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.275 ms 00:24:07.481 [2024-12-05 19:42:26.263784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.481 [2024-12-05 19:42:26.263877] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:07.481 [2024-12-05 19:42:26.264692] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:07.481 [2024-12-05 19:42:26.264735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.481 [2024-12-05 19:42:26.264743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:07.481 [2024-12-05 19:42:26.264752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.866 ms 00:24:07.481 [2024-12-05 19:42:26.264759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.481 [2024-12-05 19:42:26.265942] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:07.481 [2024-12-05 19:42:26.276467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.481 [2024-12-05 19:42:26.276521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:07.481 [2024-12-05 19:42:26.276533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.524 ms 00:24:07.481 [2024-12-05 19:42:26.276539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.481 [2024-12-05 19:42:26.276656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.481 [2024-12-05 19:42:26.276667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:07.481 [2024-12-05 19:42:26.276674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:24:07.481 [2024-12-05 
19:42:26.276680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.481 [2024-12-05 19:42:26.282173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.481 [2024-12-05 19:42:26.282214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:07.481 [2024-12-05 19:42:26.282224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.451 ms 00:24:07.481 [2024-12-05 19:42:26.282231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.481 [2024-12-05 19:42:26.282343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.481 [2024-12-05 19:42:26.282351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:07.481 [2024-12-05 19:42:26.282358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:07.481 [2024-12-05 19:42:26.282365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.482 [2024-12-05 19:42:26.282389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.482 [2024-12-05 19:42:26.282396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:07.482 [2024-12-05 19:42:26.282403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:07.482 [2024-12-05 19:42:26.282409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.482 [2024-12-05 19:42:26.282428] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:24:07.482 [2024-12-05 19:42:26.285469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.482 [2024-12-05 19:42:26.285503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:07.482 [2024-12-05 19:42:26.285512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.046 ms 00:24:07.482 [2024-12-05 19:42:26.285518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.482 [2024-12-05 19:42:26.285562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.482 [2024-12-05 19:42:26.285570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:07.482 [2024-12-05 19:42:26.285577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:07.482 [2024-12-05 19:42:26.285584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.482 [2024-12-05 19:42:26.285601] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:07.482 [2024-12-05 19:42:26.285618] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:07.482 [2024-12-05 19:42:26.285645] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:07.482 [2024-12-05 19:42:26.285658] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:07.482 [2024-12-05 19:42:26.285740] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:07.482 [2024-12-05 19:42:26.285748] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:07.482 [2024-12-05 19:42:26.285757] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:24:07.482 [2024-12-05 19:42:26.285768] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:07.482 [2024-12-05 19:42:26.285775] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:07.482 [2024-12-05 19:42:26.285782] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:24:07.482 [2024-12-05 19:42:26.285788] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:07.482 [2024-12-05 19:42:26.285794] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:07.482 [2024-12-05 19:42:26.285800] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:07.482 [2024-12-05 19:42:26.285806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.482 [2024-12-05 19:42:26.285812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:07.482 [2024-12-05 19:42:26.285819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:24:07.482 [2024-12-05 19:42:26.285824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.482 [2024-12-05 19:42:26.285894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.482 [2024-12-05 19:42:26.285902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:07.482 [2024-12-05 19:42:26.285909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:07.482 [2024-12-05 19:42:26.285915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.482 [2024-12-05 19:42:26.286008] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:07.482 [2024-12-05 19:42:26.286017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:07.482 [2024-12-05 19:42:26.286023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:07.482 [2024-12-05 19:42:26.286029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.482 [2024-12-05 19:42:26.286035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:07.482 [2024-12-05 19:42:26.286040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:07.482 [2024-12-05 19:42:26.286046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:24:07.482 [2024-12-05 19:42:26.286052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:07.482 [2024-12-05 19:42:26.286057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:24:07.482 [2024-12-05 19:42:26.286063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:07.482 [2024-12-05 19:42:26.286068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:07.482 [2024-12-05 19:42:26.286079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:24:07.482 [2024-12-05 19:42:26.286084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:07.482 [2024-12-05 19:42:26.286089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:07.482 [2024-12-05 19:42:26.286094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:24:07.482 [2024-12-05 19:42:26.286100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.482 [2024-12-05 19:42:26.286106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:24:07.482 [2024-12-05 19:42:26.286112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:24:07.482 [2024-12-05 19:42:26.286117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.482 [2024-12-05 19:42:26.286122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:07.482 [2024-12-05 19:42:26.286144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:24:07.482 [2024-12-05 19:42:26.286150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.482 [2024-12-05 19:42:26.286156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:07.482 [2024-12-05 19:42:26.286161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:24:07.482 [2024-12-05 19:42:26.286167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.482 [2024-12-05 19:42:26.286172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:07.482 [2024-12-05 19:42:26.286178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:24:07.482 [2024-12-05 19:42:26.286183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.482 [2024-12-05 19:42:26.286188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:07.482 [2024-12-05 19:42:26.286194] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:24:07.482 [2024-12-05 19:42:26.286199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:07.482 [2024-12-05 19:42:26.286205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:07.482 [2024-12-05 19:42:26.286210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:24:07.482 [2024-12-05 19:42:26.286215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:07.482 [2024-12-05 19:42:26.286220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:07.482 [2024-12-05 19:42:26.286225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:24:07.482 [2024-12-05 19:42:26.286231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:07.482 [2024-12-05 19:42:26.286236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:07.482 [2024-12-05 19:42:26.286241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:24:07.482 [2024-12-05 19:42:26.286246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.482 [2024-12-05 19:42:26.286251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:07.482 [2024-12-05 19:42:26.286262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:24:07.482 [2024-12-05 19:42:26.286267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.482 [2024-12-05 19:42:26.286272] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:07.482 [2024-12-05 19:42:26.286278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:07.482 [2024-12-05 19:42:26.286286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:07.482 [2024-12-05 19:42:26.286291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:07.482 [2024-12-05 19:42:26.286297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:07.482 [2024-12-05 19:42:26.286303] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:07.482 [2024-12-05 19:42:26.286309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:07.482 [2024-12-05 19:42:26.286314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:07.482 [2024-12-05 19:42:26.286319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:07.482 [2024-12-05 19:42:26.286324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:07.482 [2024-12-05 19:42:26.286331] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:07.482 [2024-12-05 19:42:26.286338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:07.482 [2024-12-05 19:42:26.286345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:24:07.482 [2024-12-05 19:42:26.286351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:24:07.482 [2024-12-05 19:42:26.286356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:24:07.482 [2024-12-05 19:42:26.286362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:24:07.482 [2024-12-05 19:42:26.286368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:24:07.482 [2024-12-05 19:42:26.286373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:24:07.482 [2024-12-05 19:42:26.286379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:24:07.482 [2024-12-05 19:42:26.286384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:24:07.482 [2024-12-05 19:42:26.286390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:24:07.482 [2024-12-05 19:42:26.286396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:24:07.482 [2024-12-05 19:42:26.286402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:24:07.483 [2024-12-05 19:42:26.286407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:24:07.483 [2024-12-05 19:42:26.286412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:24:07.483 [2024-12-05 19:42:26.286418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:24:07.483 [2024-12-05 19:42:26.286423] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:07.483 [2024-12-05 19:42:26.286430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:07.483 [2024-12-05 19:42:26.286436] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:07.483 [2024-12-05 19:42:26.286441] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:07.483 [2024-12-05 19:42:26.286447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:07.483 [2024-12-05 19:42:26.286452] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:07.483 [2024-12-05 19:42:26.286457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.483 [2024-12-05 19:42:26.286466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:07.483 [2024-12-05 19:42:26.286471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:24:07.483 [2024-12-05 19:42:26.286477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.483 [2024-12-05 19:42:26.308579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.483 [2024-12-05 19:42:26.308627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:07.483 [2024-12-05 19:42:26.308637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.034 ms 00:24:07.483 [2024-12-05 19:42:26.308643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.483 [2024-12-05 19:42:26.308778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.483 [2024-12-05 19:42:26.308786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:07.483 [2024-12-05 19:42:26.308793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:07.483 [2024-12-05 19:42:26.308799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.483 [2024-12-05 19:42:26.351003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.483 [2024-12-05 19:42:26.351056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:07.483 [2024-12-05 19:42:26.351071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.184 ms 00:24:07.483 [2024-12-05 19:42:26.351077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.483 [2024-12-05 19:42:26.351188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.483 [2024-12-05 19:42:26.351198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:07.483 [2024-12-05 19:42:26.351206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:07.483 [2024-12-05 19:42:26.351213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.483 [2024-12-05 19:42:26.351538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.483 [2024-12-05 19:42:26.351552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:07.483 [2024-12-05 19:42:26.351560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:24:07.483 [2024-12-05 19:42:26.351569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.483 [2024-12-05 19:42:26.351685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:07.483 [2024-12-05 19:42:26.351693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:07.483 [2024-12-05 19:42:26.351700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:24:07.483 [2024-12-05 19:42:26.351706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.483 [2024-12-05 19:42:26.362951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.483 [2024-12-05 19:42:26.362997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:07.483 [2024-12-05 19:42:26.363007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.227 ms 00:24:07.483 [2024-12-05 19:42:26.363013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.483 [2024-12-05 19:42:26.373450] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:07.483 [2024-12-05 19:42:26.373661] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:07.483 [2024-12-05 19:42:26.373676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.483 [2024-12-05 19:42:26.373684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:07.483 [2024-12-05 19:42:26.373692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.522 ms 00:24:07.483 [2024-12-05 19:42:26.373699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.483 [2024-12-05 19:42:26.393428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.483 [2024-12-05 19:42:26.393488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:07.483 [2024-12-05 19:42:26.393499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.634 ms 00:24:07.483 [2024-12-05 19:42:26.393505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.483 [2024-12-05 19:42:26.403653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.483 [2024-12-05 19:42:26.403706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:07.483 [2024-12-05 19:42:26.403718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.032 ms 00:24:07.483 [2024-12-05 19:42:26.403724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.483 [2024-12-05 19:42:26.413452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.483 [2024-12-05 19:42:26.413505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:07.483 [2024-12-05 19:42:26.413516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.637 ms 00:24:07.483 [2024-12-05 19:42:26.413522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.483 [2024-12-05 19:42:26.414088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.483 [2024-12-05 19:42:26.414106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:07.483 [2024-12-05 19:42:26.414114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:24:07.483 [2024-12-05 19:42:26.414120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.483 [2024-12-05 19:42:26.461235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.483 [2024-12-05 19:42:26.461294] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:07.483 [2024-12-05 19:42:26.461305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.068 ms 00:24:07.483 [2024-12-05 19:42:26.461313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.483 [2024-12-05 19:42:26.470267] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:24:07.483 [2024-12-05 19:42:26.483482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.483 [2024-12-05 19:42:26.483532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:07.483 [2024-12-05 19:42:26.483543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.060 ms 00:24:07.483 [2024-12-05 19:42:26.483554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.483 [2024-12-05 19:42:26.483655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.483 [2024-12-05 19:42:26.483665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:07.483 [2024-12-05 19:42:26.483672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:07.483 [2024-12-05 19:42:26.483678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.483 [2024-12-05 19:42:26.483719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.483 [2024-12-05 19:42:26.483727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:07.483 [2024-12-05 19:42:26.483734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:24:07.483 [2024-12-05 19:42:26.483743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.483 [2024-12-05 19:42:26.483766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.483 [2024-12-05 19:42:26.483773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:07.483 [2024-12-05 19:42:26.483779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:07.483 [2024-12-05 19:42:26.483785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.483 [2024-12-05 19:42:26.483813] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:07.483 [2024-12-05 19:42:26.483820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.483 [2024-12-05 19:42:26.483826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:07.483 [2024-12-05 19:42:26.483832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:07.483 [2024-12-05 19:42:26.483838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.741 [2024-12-05 19:42:26.503759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.741 [2024-12-05 19:42:26.503826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:07.741 [2024-12-05 19:42:26.503837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.903 ms 00:24:07.741 [2024-12-05 19:42:26.503843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.741 [2024-12-05 19:42:26.503950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:07.741 [2024-12-05 19:42:26.503959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:07.741 [2024-12-05 19:42:26.503966] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:07.741 [2024-12-05 19:42:26.503972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:07.741 [2024-12-05 19:42:26.505090] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:07.741 [2024-12-05 19:42:26.508261] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 243.491 ms, result 0 00:24:07.741 [2024-12-05 19:42:26.509004] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:07.741 [2024-12-05 19:42:26.520750] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:08.672  [2024-12-05T19:42:28.611Z] Copying: 42/256 [MB] (42 MBps) [2024-12-05T19:42:29.986Z] Copying: 87/256 [MB] (44 MBps) [2024-12-05T19:42:30.919Z] Copying: 130/256 [MB] (43 MBps) [2024-12-05T19:42:31.860Z] Copying: 175/256 [MB] (44 MBps) [2024-12-05T19:42:32.795Z] Copying: 218/256 [MB] (42 MBps) [2024-12-05T19:42:32.795Z] Copying: 256/256 [MB] (average 43 MBps)[2024-12-05 19:42:32.618828] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:13.789 [2024-12-05 19:42:32.629290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.789 [2024-12-05 19:42:32.629532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:13.789 [2024-12-05 19:42:32.629616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:13.789 [2024-12-05 19:42:32.629640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.789 [2024-12-05 19:42:32.629682] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:24:13.789 [2024-12-05 19:42:32.632357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.789 [2024-12-05 19:42:32.632509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:13.789 [2024-12-05 19:42:32.632575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.636 ms 00:24:13.789 [2024-12-05 19:42:32.632599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.789 [2024-12-05 19:42:32.632893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.789 [2024-12-05 19:42:32.632927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:13.789 [2024-12-05 19:42:32.632948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:24:13.789 [2024-12-05 19:42:32.633011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.789 [2024-12-05 19:42:32.636891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.789 [2024-12-05 19:42:32.637012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:13.789 [2024-12-05 19:42:32.637089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.842 ms 00:24:13.789 [2024-12-05 19:42:32.637111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.789 [2024-12-05 19:42:32.644641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.790 [2024-12-05 19:42:32.644845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:13.790 [2024-12-05 19:42:32.644902] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.486 ms 00:24:13.790 [2024-12-05 19:42:32.644926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.790 [2024-12-05 19:42:32.671865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.790 [2024-12-05 19:42:32.672047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:13.790 [2024-12-05 19:42:32.672178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.844 ms 00:24:13.790 [2024-12-05 19:42:32.672216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.790 [2024-12-05 19:42:32.688165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.790 [2024-12-05 19:42:32.688374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:13.790 [2024-12-05 19:42:32.688439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.862 ms 00:24:13.790 [2024-12-05 19:42:32.688461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.790 [2024-12-05 19:42:32.688637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.790 [2024-12-05 19:42:32.688664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:13.790 [2024-12-05 19:42:32.688691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:24:13.790 [2024-12-05 19:42:32.688710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.790 [2024-12-05 19:42:32.717192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.790 [2024-12-05 19:42:32.717416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:13.790 [2024-12-05 19:42:32.717474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.450 ms 00:24:13.790 [2024-12-05 19:42:32.717497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.790 [2024-12-05 19:42:32.747630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.790 [2024-12-05 19:42:32.747818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:13.790 [2024-12-05 19:42:32.747836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.048 ms 00:24:13.790 [2024-12-05 19:42:32.747844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:13.790 [2024-12-05 19:42:32.773903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:13.790 [2024-12-05 19:42:32.774148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:13.790 [2024-12-05 19:42:32.774278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.985 ms 00:24:13.790 [2024-12-05 19:42:32.774308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.050 [2024-12-05 19:42:32.799150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.050 [2024-12-05 19:42:32.799331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:14.050 [2024-12-05 19:42:32.799414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.725 ms 00:24:14.050 [2024-12-05 19:42:32.799440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.051 [2024-12-05 19:42:32.799558] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:14.051 [2024-12-05 19:42:32.799606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.799754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.799789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.799856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.799918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.799948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800703] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 
19:42:32.800892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.800996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.801004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.801011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.801019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.801026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.801034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.801041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:14.051 [2024-12-05 19:42:32.801048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 
00:24:14.052 [2024-12-05 19:42:32.801078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 
wr_cnt: 0 state: free 00:24:14.052 [2024-12-05 19:42:32.801312] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:14.052 [2024-12-05 19:42:32.801320] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5b576645-d761-45ff-acc8-8625a1d5c445 00:24:14.052 [2024-12-05 19:42:32.801328] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:14.052 [2024-12-05 19:42:32.801335] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:14.052 [2024-12-05 19:42:32.801343] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:14.052 [2024-12-05 19:42:32.801351] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:14.052 [2024-12-05 19:42:32.801358] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:14.052 [2024-12-05 19:42:32.801365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:14.052 [2024-12-05 19:42:32.801376] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:14.052 [2024-12-05 19:42:32.801383] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:14.052 [2024-12-05 19:42:32.801389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:14.052 [2024-12-05 19:42:32.801397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.052 [2024-12-05 19:42:32.801405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:14.052 [2024-12-05 19:42:32.801413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.842 ms 00:24:14.052 [2024-12-05 19:42:32.801420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.052 [2024-12-05 19:42:32.814704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.052 [2024-12-05 19:42:32.814908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:14.052 [2024-12-05 19:42:32.814968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.233 ms 00:24:14.052 [2024-12-05 19:42:32.814992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.052 [2024-12-05 19:42:32.815413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:14.052 [2024-12-05 19:42:32.815454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:14.052 [2024-12-05 19:42:32.815514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:24:14.052 [2024-12-05 19:42:32.815536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.052 [2024-12-05 19:42:32.850240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.052 [2024-12-05 19:42:32.850437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:14.052 [2024-12-05 19:42:32.850493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.052 [2024-12-05 19:42:32.850522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.052 [2024-12-05 19:42:32.850663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.052 [2024-12-05 19:42:32.850690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:14.052 [2024-12-05 19:42:32.850748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.052 [2024-12-05 19:42:32.850770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
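The statistics dump above reports total writes 960 against user writes 0, which is why the WAF (write amplification factor) line prints "inf". A minimal illustrative sketch of the same ratio — not part of the test suite, counters copied from the dump:

# Illustrative only: WAF is total media writes divided by user writes;
# with user_writes = 0 the ratio is reported as infinite, matching the
# "WAF: inf" line in the dump above.
total_writes=960
user_writes=0
awk -v t="$total_writes" -v u="$user_writes" 'BEGIN {
    if (u == 0) print "WAF: inf"
    else printf "WAF: %.2f\n", t / u
}'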
00:24:14.052 [2024-12-05 19:42:32.850831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.052 [2024-12-05 19:42:32.850907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:14.052 [2024-12-05 19:42:32.850928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.052 [2024-12-05 19:42:32.850978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.052 [2024-12-05 19:42:32.851015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.052 [2024-12-05 19:42:32.851040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:14.052 [2024-12-05 19:42:32.851113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.052 [2024-12-05 19:42:32.851160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.052 [2024-12-05 19:42:32.931316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.052 [2024-12-05 19:42:32.931530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:14.052 [2024-12-05 19:42:32.931549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.052 [2024-12-05 19:42:32.931557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.052 [2024-12-05 19:42:33.002405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.052 [2024-12-05 19:42:33.002461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:14.052 [2024-12-05 19:42:33.002473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.052 [2024-12-05 19:42:33.002481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.052 [2024-12-05 19:42:33.002555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.052 [2024-12-05 19:42:33.002565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:14.052 [2024-12-05 19:42:33.002573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.052 [2024-12-05 19:42:33.002580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.052 [2024-12-05 19:42:33.002608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.052 [2024-12-05 19:42:33.002622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:14.052 [2024-12-05 19:42:33.002630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.052 [2024-12-05 19:42:33.002637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.052 [2024-12-05 19:42:33.002721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.052 [2024-12-05 19:42:33.002731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:14.052 [2024-12-05 19:42:33.002739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.052 [2024-12-05 19:42:33.002746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.052 [2024-12-05 19:42:33.002776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.052 [2024-12-05 19:42:33.002785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:14.052 [2024-12-05 19:42:33.002795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.052 [2024-12-05 
19:42:33.002802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.052 [2024-12-05 19:42:33.002838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.052 [2024-12-05 19:42:33.002847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:14.052 [2024-12-05 19:42:33.002854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.052 [2024-12-05 19:42:33.002861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.052 [2024-12-05 19:42:33.002902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:14.052 [2024-12-05 19:42:33.002916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:14.052 [2024-12-05 19:42:33.002924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:14.052 [2024-12-05 19:42:33.002931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:14.052 [2024-12-05 19:42:33.003057] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 373.776 ms, result 0 00:24:14.986 00:24:14.986 00:24:14.986 19:42:33 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:15.243 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:24:15.243 19:42:34 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:24:15.243 19:42:34 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:24:15.243 19:42:34 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:24:15.243 19:42:34 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:15.243 19:42:34 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:24:15.502 19:42:34 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:24:15.502 19:42:34 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 77032 00:24:15.502 19:42:34 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 77032 ']' 00:24:15.502 19:42:34 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 77032 00:24:15.502 Process with pid 77032 is not found 00:24:15.502 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77032) - No such process 00:24:15.502 19:42:34 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 77032 is not found' 00:24:15.502 00:24:15.502 real 1m2.992s 00:24:15.502 user 1m41.103s 00:24:15.502 sys 0m5.797s 00:24:15.502 19:42:34 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:15.502 19:42:34 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:24:15.502 ************************************ 00:24:15.502 END TEST ftl_trim 00:24:15.502 ************************************ 00:24:15.502 19:42:34 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:24:15.502 19:42:34 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:15.502 19:42:34 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:15.502 19:42:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:24:15.502 ************************************ 00:24:15.502 START TEST ftl_restore 00:24:15.502 ************************************ 00:24:15.502 19:42:34 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 
0000:00:10.0 0000:00:11.0 00:24:15.502 * Looking for test storage... 00:24:15.502 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:24:15.502 19:42:34 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:15.502 19:42:34 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:15.502 19:42:34 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:24:15.502 19:42:34 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:15.502 19:42:34 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:15.502 19:42:34 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:15.502 19:42:34 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:15.502 19:42:34 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:24:15.503 19:42:34 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:24:15.503 19:42:34 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:24:15.503 19:42:34 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:24:15.503 19:42:34 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:24:15.503 19:42:34 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:24:15.503 19:42:34 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:24:15.503 19:42:34 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:15.503 19:42:34 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:24:15.503 19:42:34 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:24:15.503 19:42:34 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:15.503 19:42:34 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:15.503 19:42:34 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:24:15.503 19:42:34 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:24:15.503 19:42:34 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:15.503 19:42:34 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:24:15.503 19:42:34 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:24:15.503 19:42:34 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:24:15.503 19:42:34 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:24:15.503 19:42:34 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:15.503 19:42:34 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:24:15.503 19:42:34 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:24:15.503 19:42:34 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:15.503 19:42:34 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:15.503 19:42:34 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:24:15.503 19:42:34 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:15.503 19:42:34 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:15.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.503 --rc genhtml_branch_coverage=1 00:24:15.503 --rc genhtml_function_coverage=1 00:24:15.503 --rc genhtml_legend=1 00:24:15.503 --rc geninfo_all_blocks=1 00:24:15.503 --rc geninfo_unexecuted_blocks=1 00:24:15.503 00:24:15.503 ' 00:24:15.503 19:42:34 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:15.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.503 --rc 
genhtml_branch_coverage=1 00:24:15.503 --rc genhtml_function_coverage=1 00:24:15.503 --rc genhtml_legend=1 00:24:15.503 --rc geninfo_all_blocks=1 00:24:15.503 --rc geninfo_unexecuted_blocks=1 00:24:15.503 00:24:15.503 ' 00:24:15.503 19:42:34 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:15.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.503 --rc genhtml_branch_coverage=1 00:24:15.503 --rc genhtml_function_coverage=1 00:24:15.503 --rc genhtml_legend=1 00:24:15.503 --rc geninfo_all_blocks=1 00:24:15.503 --rc geninfo_unexecuted_blocks=1 00:24:15.503 00:24:15.503 ' 00:24:15.503 19:42:34 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:15.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.503 --rc genhtml_branch_coverage=1 00:24:15.503 --rc genhtml_function_coverage=1 00:24:15.503 --rc genhtml_legend=1 00:24:15.503 --rc geninfo_all_blocks=1 00:24:15.503 --rc geninfo_unexecuted_blocks=1 00:24:15.503 00:24:15.503 ' 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@22 -- 
# export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.m9lWcFzNPq 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=77240 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 77240 00:24:15.503 19:42:34 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 77240 ']' 00:24:15.503 19:42:34 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:15.503 19:42:34 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.503 19:42:34 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:15.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.503 19:42:34 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.503 19:42:34 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:15.503 19:42:34 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:24:15.761 [2024-12-05 19:42:34.576607] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
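The xtrace above shows restore.sh consuming its arguments with getopts: -c carries the NV-cache PCIe address (0000:00:10.0), and once the options are shifted away the first positional argument becomes the base device (0000:00:11.0). A minimal sketch of that pattern, assuming behavior from the traced optstring ":u:c:f" — the -u and -f arms and their variable names are guesses, not the real script's code:

# Sketch of the traced option handling; the real restore.sh does
# "shift 2" after parsing, which is equivalent to the OPTIND shift here.
while getopts ":u:c:f" opt; do
    case $opt in
        c) nv_cache=$OPTARG ;;   # NV-cache BDF, e.g. 0000:00:10.0
        u) uuid=$OPTARG ;;       # assumed: UUID of an existing FTL instance
        f) fast=1 ;;             # assumed: fast-startup flag
    esac
done
shift $((OPTIND - 1))
device=$1                        # base device BDF, e.g. 0000:00:11.0
timeout=240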
00:24:15.761 [2024-12-05 19:42:34.576727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77240 ] 00:24:15.761 [2024-12-05 19:42:34.733206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.019 [2024-12-05 19:42:34.822577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.589 19:42:35 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:16.589 19:42:35 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:24:16.589 19:42:35 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:24:16.589 19:42:35 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:24:16.589 19:42:35 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:24:16.589 19:42:35 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:24:16.589 19:42:35 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:24:16.589 19:42:35 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:24:16.845 19:42:35 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:24:16.845 19:42:35 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:24:16.846 19:42:35 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:24:16.846 19:42:35 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:24:16.846 19:42:35 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:16.846 19:42:35 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:24:16.846 19:42:35 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:24:16.846 19:42:35 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:24:17.104 19:42:35 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:17.104 { 00:24:17.104 "name": "nvme0n1", 00:24:17.104 "aliases": [ 00:24:17.104 "b7ef9ee0-9b4b-4c0d-8eaa-8ef46ac3b893" 00:24:17.104 ], 00:24:17.104 "product_name": "NVMe disk", 00:24:17.104 "block_size": 4096, 00:24:17.104 "num_blocks": 1310720, 00:24:17.104 "uuid": "b7ef9ee0-9b4b-4c0d-8eaa-8ef46ac3b893", 00:24:17.104 "numa_id": -1, 00:24:17.104 "assigned_rate_limits": { 00:24:17.104 "rw_ios_per_sec": 0, 00:24:17.104 "rw_mbytes_per_sec": 0, 00:24:17.104 "r_mbytes_per_sec": 0, 00:24:17.104 "w_mbytes_per_sec": 0 00:24:17.104 }, 00:24:17.104 "claimed": true, 00:24:17.104 "claim_type": "read_many_write_one", 00:24:17.104 "zoned": false, 00:24:17.104 "supported_io_types": { 00:24:17.104 "read": true, 00:24:17.104 "write": true, 00:24:17.104 "unmap": true, 00:24:17.104 "flush": true, 00:24:17.104 "reset": true, 00:24:17.104 "nvme_admin": true, 00:24:17.104 "nvme_io": true, 00:24:17.104 "nvme_io_md": false, 00:24:17.104 "write_zeroes": true, 00:24:17.104 "zcopy": false, 00:24:17.104 "get_zone_info": false, 00:24:17.104 "zone_management": false, 00:24:17.104 "zone_append": false, 00:24:17.104 "compare": true, 00:24:17.104 "compare_and_write": false, 00:24:17.104 "abort": true, 00:24:17.104 "seek_hole": false, 00:24:17.104 "seek_data": false, 00:24:17.104 "copy": true, 00:24:17.105 "nvme_iov_md": false 00:24:17.105 }, 00:24:17.105 "driver_specific": { 00:24:17.105 "nvme": [ 
00:24:17.105 { 00:24:17.105 "pci_address": "0000:00:11.0", 00:24:17.105 "trid": { 00:24:17.105 "trtype": "PCIe", 00:24:17.105 "traddr": "0000:00:11.0" 00:24:17.105 }, 00:24:17.105 "ctrlr_data": { 00:24:17.105 "cntlid": 0, 00:24:17.105 "vendor_id": "0x1b36", 00:24:17.105 "model_number": "QEMU NVMe Ctrl", 00:24:17.105 "serial_number": "12341", 00:24:17.105 "firmware_revision": "8.0.0", 00:24:17.105 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:17.105 "oacs": { 00:24:17.105 "security": 0, 00:24:17.105 "format": 1, 00:24:17.105 "firmware": 0, 00:24:17.105 "ns_manage": 1 00:24:17.105 }, 00:24:17.105 "multi_ctrlr": false, 00:24:17.105 "ana_reporting": false 00:24:17.105 }, 00:24:17.105 "vs": { 00:24:17.105 "nvme_version": "1.4" 00:24:17.105 }, 00:24:17.105 "ns_data": { 00:24:17.105 "id": 1, 00:24:17.105 "can_share": false 00:24:17.105 } 00:24:17.105 } 00:24:17.105 ], 00:24:17.105 "mp_policy": "active_passive" 00:24:17.105 } 00:24:17.105 } 00:24:17.105 ]' 00:24:17.105 19:42:35 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:17.105 19:42:35 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:24:17.105 19:42:35 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:17.105 19:42:35 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:24:17.105 19:42:35 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:24:17.105 19:42:35 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:24:17.105 19:42:35 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:24:17.105 19:42:35 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:24:17.105 19:42:35 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:24:17.105 19:42:35 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:17.105 19:42:35 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:17.105 19:42:36 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=d58e7cc7-f2e9-4033-a2e4-b0429903fb2c 00:24:17.105 19:42:36 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:24:17.105 19:42:36 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d58e7cc7-f2e9-4033-a2e4-b0429903fb2c 00:24:17.362 19:42:36 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:24:17.620 19:42:36 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=d444492f-d811-45e6-8d3b-be0a9bf513b0 00:24:17.620 19:42:36 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u d444492f-d811-45e6-8d3b-be0a9bf513b0 00:24:17.879 19:42:36 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=8c5c59fd-cff2-479d-a983-6b9cd11f6044 00:24:17.879 19:42:36 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:24:17.879 19:42:36 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 8c5c59fd-cff2-479d-a983-6b9cd11f6044 00:24:17.879 19:42:36 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:24:17.879 19:42:36 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:17.879 19:42:36 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=8c5c59fd-cff2-479d-a983-6b9cd11f6044 00:24:17.879 19:42:36 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:24:17.879 19:42:36 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
8c5c59fd-cff2-479d-a983-6b9cd11f6044 00:24:17.879 19:42:36 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=8c5c59fd-cff2-479d-a983-6b9cd11f6044 00:24:17.879 19:42:36 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:17.879 19:42:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:24:17.879 19:42:36 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:24:17.879 19:42:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8c5c59fd-cff2-479d-a983-6b9cd11f6044 00:24:18.137 19:42:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:18.137 { 00:24:18.137 "name": "8c5c59fd-cff2-479d-a983-6b9cd11f6044", 00:24:18.137 "aliases": [ 00:24:18.137 "lvs/nvme0n1p0" 00:24:18.137 ], 00:24:18.137 "product_name": "Logical Volume", 00:24:18.137 "block_size": 4096, 00:24:18.137 "num_blocks": 26476544, 00:24:18.137 "uuid": "8c5c59fd-cff2-479d-a983-6b9cd11f6044", 00:24:18.137 "assigned_rate_limits": { 00:24:18.137 "rw_ios_per_sec": 0, 00:24:18.137 "rw_mbytes_per_sec": 0, 00:24:18.137 "r_mbytes_per_sec": 0, 00:24:18.137 "w_mbytes_per_sec": 0 00:24:18.137 }, 00:24:18.137 "claimed": false, 00:24:18.137 "zoned": false, 00:24:18.137 "supported_io_types": { 00:24:18.137 "read": true, 00:24:18.137 "write": true, 00:24:18.137 "unmap": true, 00:24:18.137 "flush": false, 00:24:18.137 "reset": true, 00:24:18.137 "nvme_admin": false, 00:24:18.137 "nvme_io": false, 00:24:18.137 "nvme_io_md": false, 00:24:18.137 "write_zeroes": true, 00:24:18.137 "zcopy": false, 00:24:18.137 "get_zone_info": false, 00:24:18.137 "zone_management": false, 00:24:18.137 "zone_append": false, 00:24:18.137 "compare": false, 00:24:18.137 "compare_and_write": false, 00:24:18.137 "abort": false, 00:24:18.137 "seek_hole": true, 00:24:18.137 "seek_data": true, 00:24:18.137 "copy": false, 00:24:18.137 "nvme_iov_md": false 00:24:18.137 }, 00:24:18.137 "driver_specific": { 00:24:18.137 "lvol": { 00:24:18.137 "lvol_store_uuid": "d444492f-d811-45e6-8d3b-be0a9bf513b0", 00:24:18.137 "base_bdev": "nvme0n1", 00:24:18.137 "thin_provision": true, 00:24:18.137 "num_allocated_clusters": 0, 00:24:18.137 "snapshot": false, 00:24:18.137 "clone": false, 00:24:18.137 "esnap_clone": false 00:24:18.137 } 00:24:18.137 } 00:24:18.137 } 00:24:18.137 ]' 00:24:18.137 19:42:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:18.137 19:42:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:24:18.137 19:42:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:18.137 19:42:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:18.137 19:42:37 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:18.137 19:42:37 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:24:18.137 19:42:37 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:24:18.138 19:42:37 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:24:18.138 19:42:37 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:24:18.395 19:42:37 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:24:18.396 19:42:37 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:24:18.396 19:42:37 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 8c5c59fd-cff2-479d-a983-6b9cd11f6044 00:24:18.396 19:42:37 
ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=8c5c59fd-cff2-479d-a983-6b9cd11f6044 00:24:18.396 19:42:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:18.396 19:42:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:24:18.396 19:42:37 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:24:18.396 19:42:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8c5c59fd-cff2-479d-a983-6b9cd11f6044 00:24:18.653 19:42:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:18.653 { 00:24:18.653 "name": "8c5c59fd-cff2-479d-a983-6b9cd11f6044", 00:24:18.653 "aliases": [ 00:24:18.653 "lvs/nvme0n1p0" 00:24:18.653 ], 00:24:18.653 "product_name": "Logical Volume", 00:24:18.653 "block_size": 4096, 00:24:18.653 "num_blocks": 26476544, 00:24:18.653 "uuid": "8c5c59fd-cff2-479d-a983-6b9cd11f6044", 00:24:18.653 "assigned_rate_limits": { 00:24:18.653 "rw_ios_per_sec": 0, 00:24:18.653 "rw_mbytes_per_sec": 0, 00:24:18.653 "r_mbytes_per_sec": 0, 00:24:18.653 "w_mbytes_per_sec": 0 00:24:18.653 }, 00:24:18.653 "claimed": false, 00:24:18.653 "zoned": false, 00:24:18.653 "supported_io_types": { 00:24:18.653 "read": true, 00:24:18.653 "write": true, 00:24:18.653 "unmap": true, 00:24:18.653 "flush": false, 00:24:18.653 "reset": true, 00:24:18.653 "nvme_admin": false, 00:24:18.653 "nvme_io": false, 00:24:18.653 "nvme_io_md": false, 00:24:18.653 "write_zeroes": true, 00:24:18.653 "zcopy": false, 00:24:18.653 "get_zone_info": false, 00:24:18.653 "zone_management": false, 00:24:18.653 "zone_append": false, 00:24:18.653 "compare": false, 00:24:18.653 "compare_and_write": false, 00:24:18.653 "abort": false, 00:24:18.653 "seek_hole": true, 00:24:18.653 "seek_data": true, 00:24:18.653 "copy": false, 00:24:18.653 "nvme_iov_md": false 00:24:18.653 }, 00:24:18.653 "driver_specific": { 00:24:18.653 "lvol": { 00:24:18.653 "lvol_store_uuid": "d444492f-d811-45e6-8d3b-be0a9bf513b0", 00:24:18.653 "base_bdev": "nvme0n1", 00:24:18.653 "thin_provision": true, 00:24:18.653 "num_allocated_clusters": 0, 00:24:18.653 "snapshot": false, 00:24:18.653 "clone": false, 00:24:18.653 "esnap_clone": false 00:24:18.653 } 00:24:18.653 } 00:24:18.653 } 00:24:18.653 ]' 00:24:18.653 19:42:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:18.911 19:42:37 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:24:18.911 19:42:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:18.911 19:42:37 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:18.911 19:42:37 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:18.911 19:42:37 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:24:18.911 19:42:37 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:24:18.911 19:42:37 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:24:18.911 19:42:37 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:24:18.911 19:42:37 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 8c5c59fd-cff2-479d-a983-6b9cd11f6044 00:24:18.911 19:42:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=8c5c59fd-cff2-479d-a983-6b9cd11f6044 00:24:18.911 19:42:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:24:18.911 19:42:37 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # local bs 00:24:18.911 19:42:37 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:24:18.911 19:42:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 8c5c59fd-cff2-479d-a983-6b9cd11f6044 00:24:19.168 19:42:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:24:19.168 { 00:24:19.168 "name": "8c5c59fd-cff2-479d-a983-6b9cd11f6044", 00:24:19.168 "aliases": [ 00:24:19.168 "lvs/nvme0n1p0" 00:24:19.168 ], 00:24:19.168 "product_name": "Logical Volume", 00:24:19.168 "block_size": 4096, 00:24:19.169 "num_blocks": 26476544, 00:24:19.169 "uuid": "8c5c59fd-cff2-479d-a983-6b9cd11f6044", 00:24:19.169 "assigned_rate_limits": { 00:24:19.169 "rw_ios_per_sec": 0, 00:24:19.169 "rw_mbytes_per_sec": 0, 00:24:19.169 "r_mbytes_per_sec": 0, 00:24:19.169 "w_mbytes_per_sec": 0 00:24:19.169 }, 00:24:19.169 "claimed": false, 00:24:19.169 "zoned": false, 00:24:19.169 "supported_io_types": { 00:24:19.169 "read": true, 00:24:19.169 "write": true, 00:24:19.169 "unmap": true, 00:24:19.169 "flush": false, 00:24:19.169 "reset": true, 00:24:19.169 "nvme_admin": false, 00:24:19.169 "nvme_io": false, 00:24:19.169 "nvme_io_md": false, 00:24:19.169 "write_zeroes": true, 00:24:19.169 "zcopy": false, 00:24:19.169 "get_zone_info": false, 00:24:19.169 "zone_management": false, 00:24:19.169 "zone_append": false, 00:24:19.169 "compare": false, 00:24:19.169 "compare_and_write": false, 00:24:19.169 "abort": false, 00:24:19.169 "seek_hole": true, 00:24:19.169 "seek_data": true, 00:24:19.169 "copy": false, 00:24:19.169 "nvme_iov_md": false 00:24:19.169 }, 00:24:19.169 "driver_specific": { 00:24:19.169 "lvol": { 00:24:19.169 "lvol_store_uuid": "d444492f-d811-45e6-8d3b-be0a9bf513b0", 00:24:19.169 "base_bdev": "nvme0n1", 00:24:19.169 "thin_provision": true, 00:24:19.169 "num_allocated_clusters": 0, 00:24:19.169 "snapshot": false, 00:24:19.169 "clone": false, 00:24:19.169 "esnap_clone": false 00:24:19.169 } 00:24:19.169 } 00:24:19.169 } 00:24:19.169 ]' 00:24:19.169 19:42:38 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:24:19.169 19:42:38 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:24:19.169 19:42:38 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:24:19.169 19:42:38 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:24:19.169 19:42:38 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:24:19.169 19:42:38 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:24:19.169 19:42:38 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:24:19.169 19:42:38 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 8c5c59fd-cff2-479d-a983-6b9cd11f6044 --l2p_dram_limit 10' 00:24:19.169 19:42:38 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:24:19.169 19:42:38 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:24:19.169 19:42:38 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:24:19.169 19:42:38 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:24:19.169 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:24:19.169 19:42:38 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 8c5c59fd-cff2-479d-a983-6b9cd11f6044 --l2p_dram_limit 10 -c nvc0n1p0 00:24:19.427 
[2024-12-05 19:42:38.256595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.428 [2024-12-05 19:42:38.256654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:19.428 [2024-12-05 19:42:38.256672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:19.428 [2024-12-05 19:42:38.256681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.428 [2024-12-05 19:42:38.256735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.428 [2024-12-05 19:42:38.256744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:19.428 [2024-12-05 19:42:38.256754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:19.428 [2024-12-05 19:42:38.256762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.428 [2024-12-05 19:42:38.256788] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:19.428 [2024-12-05 19:42:38.257545] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:19.428 [2024-12-05 19:42:38.257576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.428 [2024-12-05 19:42:38.257584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:19.428 [2024-12-05 19:42:38.257594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.795 ms 00:24:19.428 [2024-12-05 19:42:38.257602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.428 [2024-12-05 19:42:38.257733] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7b4ebe3b-0e39-46e8-b133-cc994bddeda9 00:24:19.428 [2024-12-05 19:42:38.258842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.428 [2024-12-05 19:42:38.258880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:24:19.428 [2024-12-05 19:42:38.258891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:19.428 [2024-12-05 19:42:38.258900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.428 [2024-12-05 19:42:38.264355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.428 [2024-12-05 19:42:38.264400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:19.428 [2024-12-05 19:42:38.264411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.405 ms 00:24:19.428 [2024-12-05 19:42:38.264420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.428 [2024-12-05 19:42:38.264516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.428 [2024-12-05 19:42:38.264527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:19.428 [2024-12-05 19:42:38.264536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:24:19.428 [2024-12-05 19:42:38.264548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.428 [2024-12-05 19:42:38.264615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.428 [2024-12-05 19:42:38.264631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:19.428 [2024-12-05 19:42:38.264642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:19.428 [2024-12-05 19:42:38.264651] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.428 [2024-12-05 19:42:38.264673] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:19.428 [2024-12-05 19:42:38.268296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.428 [2024-12-05 19:42:38.268332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:19.428 [2024-12-05 19:42:38.268345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.626 ms 00:24:19.428 [2024-12-05 19:42:38.268354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.428 [2024-12-05 19:42:38.268395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.428 [2024-12-05 19:42:38.268403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:19.428 [2024-12-05 19:42:38.268412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:19.428 [2024-12-05 19:42:38.268419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.428 [2024-12-05 19:42:38.268438] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:24:19.428 [2024-12-05 19:42:38.268581] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:19.428 [2024-12-05 19:42:38.268602] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:19.428 [2024-12-05 19:42:38.268613] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:19.428 [2024-12-05 19:42:38.268625] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:19.428 [2024-12-05 19:42:38.268634] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:19.428 [2024-12-05 19:42:38.268643] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:19.428 [2024-12-05 19:42:38.268650] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:19.428 [2024-12-05 19:42:38.268662] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:19.428 [2024-12-05 19:42:38.268669] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:19.428 [2024-12-05 19:42:38.268679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.428 [2024-12-05 19:42:38.268692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:19.428 [2024-12-05 19:42:38.268701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:24:19.428 [2024-12-05 19:42:38.268708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.428 [2024-12-05 19:42:38.268794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.428 [2024-12-05 19:42:38.268802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:19.428 [2024-12-05 19:42:38.268811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:19.428 [2024-12-05 19:42:38.268818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.428 [2024-12-05 19:42:38.268947] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:19.428 [2024-12-05 19:42:38.268964] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:24:19.428 [2024-12-05 19:42:38.268975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:19.428 [2024-12-05 19:42:38.268983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:19.428 [2024-12-05 19:42:38.268992] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:19.428 [2024-12-05 19:42:38.268999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:19.428 [2024-12-05 19:42:38.269007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:19.428 [2024-12-05 19:42:38.269015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:19.428 [2024-12-05 19:42:38.269024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:19.428 [2024-12-05 19:42:38.269030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:19.428 [2024-12-05 19:42:38.269040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:19.428 [2024-12-05 19:42:38.269046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:19.428 [2024-12-05 19:42:38.269054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:19.428 [2024-12-05 19:42:38.269061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:19.428 [2024-12-05 19:42:38.269069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:19.428 [2024-12-05 19:42:38.269075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:19.428 [2024-12-05 19:42:38.269085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:19.428 [2024-12-05 19:42:38.269092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:19.428 [2024-12-05 19:42:38.269099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:19.428 [2024-12-05 19:42:38.269107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:19.428 [2024-12-05 19:42:38.269115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:19.428 [2024-12-05 19:42:38.269121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:19.428 [2024-12-05 19:42:38.269146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:19.428 [2024-12-05 19:42:38.269154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:19.428 [2024-12-05 19:42:38.269163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:19.428 [2024-12-05 19:42:38.269170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:19.428 [2024-12-05 19:42:38.269178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:19.428 [2024-12-05 19:42:38.269184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:19.428 [2024-12-05 19:42:38.269193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:19.428 [2024-12-05 19:42:38.269200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:19.428 [2024-12-05 19:42:38.269208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:19.428 [2024-12-05 19:42:38.269214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:19.428 [2024-12-05 19:42:38.269224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:19.428 [2024-12-05 19:42:38.269231] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:19.428 [2024-12-05 19:42:38.269239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:19.428 [2024-12-05 19:42:38.269245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:19.428 [2024-12-05 19:42:38.269254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:19.428 [2024-12-05 19:42:38.269261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:19.428 [2024-12-05 19:42:38.269269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:19.428 [2024-12-05 19:42:38.269276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:19.428 [2024-12-05 19:42:38.269285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:19.428 [2024-12-05 19:42:38.269291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:19.428 [2024-12-05 19:42:38.269299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:19.428 [2024-12-05 19:42:38.269306] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:19.428 [2024-12-05 19:42:38.269316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:19.428 [2024-12-05 19:42:38.269323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:19.428 [2024-12-05 19:42:38.269331] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:19.429 [2024-12-05 19:42:38.269339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:19.429 [2024-12-05 19:42:38.269348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:19.429 [2024-12-05 19:42:38.269355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:19.429 [2024-12-05 19:42:38.269364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:19.429 [2024-12-05 19:42:38.269370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:19.429 [2024-12-05 19:42:38.269378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:19.429 [2024-12-05 19:42:38.269386] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:19.429 [2024-12-05 19:42:38.269399] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:19.429 [2024-12-05 19:42:38.269408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:19.429 [2024-12-05 19:42:38.269417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:19.429 [2024-12-05 19:42:38.269424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:19.429 [2024-12-05 19:42:38.269433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:19.429 [2024-12-05 19:42:38.269440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:19.429 [2024-12-05 19:42:38.269449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:24:19.429 [2024-12-05 19:42:38.269456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:19.429 [2024-12-05 19:42:38.269465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:19.429 [2024-12-05 19:42:38.269472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:19.429 [2024-12-05 19:42:38.269483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:19.429 [2024-12-05 19:42:38.269489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:19.429 [2024-12-05 19:42:38.269498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:19.429 [2024-12-05 19:42:38.269505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:19.429 [2024-12-05 19:42:38.269514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:19.429 [2024-12-05 19:42:38.269520] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:19.429 [2024-12-05 19:42:38.269530] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:19.429 [2024-12-05 19:42:38.269537] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:19.429 [2024-12-05 19:42:38.269546] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:19.429 [2024-12-05 19:42:38.269553] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:19.429 [2024-12-05 19:42:38.269562] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:19.429 [2024-12-05 19:42:38.269571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:19.429 [2024-12-05 19:42:38.269579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:19.429 [2024-12-05 19:42:38.269587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:24:19.429 [2024-12-05 19:42:38.269596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:19.429 [2024-12-05 19:42:38.269634] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
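The superblock metadata dump above lists each region as hex block offsets and sizes counted in 4096-byte FTL blocks, which is the same information the earlier per-region dump renders in MiB. A small sketch of that conversion, using values copied from this trace; the region-name pairings are inferred by matching offsets against the NV cache layout dump (for example, type:0x2 blk_offs:0x20 blk_sz:0x5000 lines up with "Region l2p, offset 0.12 MiB, blocks 80.00 MiB"):

# Render hex block offsets/sizes (4096-byte FTL blocks) as MiB figures.
blk=4096
print_region() {                       # args: name, blk_offs, blk_sz (hex ok)
    awk -v n="$1" -v o="$(( $2 ))" -v s="$(( $3 ))" -v b="$blk" 'BEGIN {
        printf "%-8s offset %.2f MiB, size %.2f MiB\n",
               n, o * b / 1048576, s * b / 1048576 }'
}
print_region l2p     0x20   0x5000    # -> offset 0.12 MiB, size 80.00 MiB
print_region band_md 0x5020 0x80      # -> offset 80.12 MiB, size 0.50 MiB
print_region nvc_md  0x71e0 0x20      # -> offset 113.88 MiB, size 0.12 MiB

The expected outputs in the comments match the dump_region lines earlier in this trace, which is a quick way to cross-check a layout dump against the raw superblock entries.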
00:24:19.429 [2024-12-05 19:42:38.269646] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:21.955 [2024-12-05 19:42:40.426045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.955 [2024-12-05 19:42:40.426120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:21.955 [2024-12-05 19:42:40.426145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2156.402 ms 00:24:21.955 [2024-12-05 19:42:40.426156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.955 [2024-12-05 19:42:40.452361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.955 [2024-12-05 19:42:40.452424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:21.955 [2024-12-05 19:42:40.452437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.981 ms 00:24:21.955 [2024-12-05 19:42:40.452447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.955 [2024-12-05 19:42:40.452593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.955 [2024-12-05 19:42:40.452605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:21.955 [2024-12-05 19:42:40.452614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:24:21.955 [2024-12-05 19:42:40.452627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.955 [2024-12-05 19:42:40.483304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.955 [2024-12-05 19:42:40.483363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:21.955 [2024-12-05 19:42:40.483376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.642 ms 00:24:21.955 [2024-12-05 19:42:40.483388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.955 [2024-12-05 19:42:40.483427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.956 [2024-12-05 19:42:40.483440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:21.956 [2024-12-05 19:42:40.483448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:21.956 [2024-12-05 19:42:40.483465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.956 [2024-12-05 19:42:40.483862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.956 [2024-12-05 19:42:40.483898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:21.956 [2024-12-05 19:42:40.483907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:24:21.956 [2024-12-05 19:42:40.483917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.956 [2024-12-05 19:42:40.484040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.956 [2024-12-05 19:42:40.484058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:21.956 [2024-12-05 19:42:40.484069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:24:21.956 [2024-12-05 19:42:40.484079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.956 [2024-12-05 19:42:40.498317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.956 [2024-12-05 19:42:40.498367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:21.956 [2024-12-05 
19:42:40.498379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.218 ms 00:24:21.956 [2024-12-05 19:42:40.498388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.956 [2024-12-05 19:42:40.525325] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:21.956 [2024-12-05 19:42:40.528145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.956 [2024-12-05 19:42:40.528187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:21.956 [2024-12-05 19:42:40.528204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.658 ms 00:24:21.956 [2024-12-05 19:42:40.528214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.956 [2024-12-05 19:42:40.587605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.956 [2024-12-05 19:42:40.587672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:21.956 [2024-12-05 19:42:40.587688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.330 ms 00:24:21.956 [2024-12-05 19:42:40.587697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.956 [2024-12-05 19:42:40.587874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.956 [2024-12-05 19:42:40.587887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:21.956 [2024-12-05 19:42:40.587900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:24:21.956 [2024-12-05 19:42:40.587907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.956 [2024-12-05 19:42:40.612321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.956 [2024-12-05 19:42:40.612380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:21.956 [2024-12-05 19:42:40.612394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.345 ms 00:24:21.956 [2024-12-05 19:42:40.612403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.956 [2024-12-05 19:42:40.636283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.956 [2024-12-05 19:42:40.636339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:21.956 [2024-12-05 19:42:40.636354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.832 ms 00:24:21.956 [2024-12-05 19:42:40.636362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.956 [2024-12-05 19:42:40.636924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.956 [2024-12-05 19:42:40.636945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:21.956 [2024-12-05 19:42:40.636956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:24:21.956 [2024-12-05 19:42:40.636966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.956 [2024-12-05 19:42:40.706763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.956 [2024-12-05 19:42:40.706828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:21.956 [2024-12-05 19:42:40.706847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.746 ms 00:24:21.956 [2024-12-05 19:42:40.706856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.956 [2024-12-05 
19:42:40.732528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.956 [2024-12-05 19:42:40.732589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:21.956 [2024-12-05 19:42:40.732605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.551 ms 00:24:21.956 [2024-12-05 19:42:40.732614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.956 [2024-12-05 19:42:40.757835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.956 [2024-12-05 19:42:40.757898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:21.956 [2024-12-05 19:42:40.757912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.163 ms 00:24:21.956 [2024-12-05 19:42:40.757920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.956 [2024-12-05 19:42:40.782894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.956 [2024-12-05 19:42:40.782956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:21.956 [2024-12-05 19:42:40.782971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.887 ms 00:24:21.956 [2024-12-05 19:42:40.782979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.956 [2024-12-05 19:42:40.783043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.956 [2024-12-05 19:42:40.783053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:21.956 [2024-12-05 19:42:40.783066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:21.956 [2024-12-05 19:42:40.783074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.956 [2024-12-05 19:42:40.783174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:21.956 [2024-12-05 19:42:40.783188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:21.956 [2024-12-05 19:42:40.783198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:21.956 [2024-12-05 19:42:40.783205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:21.956 [2024-12-05 19:42:40.784156] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2527.109 ms, result 0 00:24:21.956 { 00:24:21.956 "name": "ftl0", 00:24:21.956 "uuid": "7b4ebe3b-0e39-46e8-b133-cc994bddeda9" 00:24:21.956 } 00:24:21.956 19:42:40 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:24:21.956 19:42:40 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:22.214 19:42:41 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:24:22.214 19:42:41 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:22.214 [2024-12-05 19:42:41.195643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.215 [2024-12-05 19:42:41.195703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:22.215 [2024-12-05 19:42:41.195716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:22.215 [2024-12-05 19:42:41.195725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.215 [2024-12-05 19:42:41.195750] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
00:24:22.215 [2024-12-05 19:42:41.198378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.215 [2024-12-05 19:42:41.198414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:22.215 [2024-12-05 19:42:41.198428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.608 ms 00:24:22.215 [2024-12-05 19:42:41.198437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.215 [2024-12-05 19:42:41.198700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.215 [2024-12-05 19:42:41.198722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:22.215 [2024-12-05 19:42:41.198733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:24:22.215 [2024-12-05 19:42:41.198741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.215 [2024-12-05 19:42:41.201962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.215 [2024-12-05 19:42:41.202004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:22.215 [2024-12-05 19:42:41.202016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.204 ms 00:24:22.215 [2024-12-05 19:42:41.202024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.215 [2024-12-05 19:42:41.208225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.215 [2024-12-05 19:42:41.208265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:22.215 [2024-12-05 19:42:41.208280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.177 ms 00:24:22.215 [2024-12-05 19:42:41.208287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.475 [2024-12-05 19:42:41.232444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.475 [2024-12-05 19:42:41.232501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:22.475 [2024-12-05 19:42:41.232515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.061 ms 00:24:22.475 [2024-12-05 19:42:41.232523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.475 [2024-12-05 19:42:41.248244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.475 [2024-12-05 19:42:41.248302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:22.475 [2024-12-05 19:42:41.248319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.653 ms 00:24:22.475 [2024-12-05 19:42:41.248328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.475 [2024-12-05 19:42:41.248524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.475 [2024-12-05 19:42:41.248535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:22.475 [2024-12-05 19:42:41.248546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:24:22.475 [2024-12-05 19:42:41.248553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.475 [2024-12-05 19:42:41.273374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.475 [2024-12-05 19:42:41.273445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:22.475 [2024-12-05 19:42:41.273459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.791 ms 00:24:22.475 [2024-12-05 19:42:41.273467] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.475 [2024-12-05 19:42:41.297777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.475 [2024-12-05 19:42:41.297833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:22.475 [2024-12-05 19:42:41.297847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.241 ms 00:24:22.475 [2024-12-05 19:42:41.297854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.475 [2024-12-05 19:42:41.321679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.475 [2024-12-05 19:42:41.321735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:22.475 [2024-12-05 19:42:41.321750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.761 ms 00:24:22.475 [2024-12-05 19:42:41.321758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.475 [2024-12-05 19:42:41.345608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.475 [2024-12-05 19:42:41.345665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:22.475 [2024-12-05 19:42:41.345679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.730 ms 00:24:22.475 [2024-12-05 19:42:41.345687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.475 [2024-12-05 19:42:41.345743] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:22.475 [2024-12-05 19:42:41.345758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.345774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.345783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.345792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.345800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.345810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.345818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.345829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.345837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.345846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.345854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.345863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.345871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.345879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 
19:42:41.345887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.345897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.345904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.345913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.345920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.345931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.345939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.345948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.345955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.345966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.345973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.346000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.346008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.346017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.346025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.346035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:22.475 [2024-12-05 19:42:41.346044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:24:22.476 [2024-12-05 19:42:41.346142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:22.476 [2024-12-05 19:42:41.346689] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:22.476 [2024-12-05 19:42:41.346699] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7b4ebe3b-0e39-46e8-b133-cc994bddeda9 00:24:22.476 [2024-12-05 19:42:41.346706] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:22.476 [2024-12-05 19:42:41.346716] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:22.476 [2024-12-05 19:42:41.346726] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:22.476 [2024-12-05 19:42:41.346735] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:22.476 [2024-12-05 19:42:41.346742] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:22.476 [2024-12-05 19:42:41.346751] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:22.476 [2024-12-05 19:42:41.346758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:22.476 [2024-12-05 19:42:41.346766] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:22.476 [2024-12-05 19:42:41.346773] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:22.476 [2024-12-05 19:42:41.346781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.476 [2024-12-05 19:42:41.346788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:22.476 [2024-12-05 19:42:41.346798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.040 ms 00:24:22.476 [2024-12-05 19:42:41.346807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.476 [2024-12-05 19:42:41.359307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.476 [2024-12-05 19:42:41.359354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:24:22.476 [2024-12-05 19:42:41.359368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.448 ms 00:24:22.476 [2024-12-05 19:42:41.359377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.476 [2024-12-05 19:42:41.359755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:22.476 [2024-12-05 19:42:41.359771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:22.476 [2024-12-05 19:42:41.359785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:24:22.476 [2024-12-05 19:42:41.359793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.476 [2024-12-05 19:42:41.401023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.476 [2024-12-05 19:42:41.401077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:22.476 [2024-12-05 19:42:41.401091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.477 [2024-12-05 19:42:41.401099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.477 [2024-12-05 19:42:41.401176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.477 [2024-12-05 19:42:41.401185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:22.477 [2024-12-05 19:42:41.401198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.477 [2024-12-05 19:42:41.401208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.477 [2024-12-05 19:42:41.401309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.477 [2024-12-05 19:42:41.401319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:22.477 [2024-12-05 19:42:41.401328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.477 [2024-12-05 19:42:41.401336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.477 [2024-12-05 19:42:41.401356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.477 [2024-12-05 19:42:41.401364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:22.477 [2024-12-05 19:42:41.401373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.477 [2024-12-05 19:42:41.401382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.477 [2024-12-05 19:42:41.477965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.477 [2024-12-05 19:42:41.478043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:22.477 [2024-12-05 19:42:41.478057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.477 [2024-12-05 19:42:41.478064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.735 [2024-12-05 19:42:41.542596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.735 [2024-12-05 19:42:41.542649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:22.735 [2024-12-05 19:42:41.542663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.735 [2024-12-05 19:42:41.542675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.735 [2024-12-05 19:42:41.542753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.735 [2024-12-05 19:42:41.542763] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:22.735 [2024-12-05 19:42:41.542772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.735 [2024-12-05 19:42:41.542779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.735 [2024-12-05 19:42:41.542843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.735 [2024-12-05 19:42:41.542852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:22.735 [2024-12-05 19:42:41.542862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.735 [2024-12-05 19:42:41.542869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.735 [2024-12-05 19:42:41.542968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.735 [2024-12-05 19:42:41.542979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:22.735 [2024-12-05 19:42:41.542988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.735 [2024-12-05 19:42:41.542996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.735 [2024-12-05 19:42:41.543028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.735 [2024-12-05 19:42:41.543037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:22.735 [2024-12-05 19:42:41.543046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.735 [2024-12-05 19:42:41.543053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.735 [2024-12-05 19:42:41.543091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.735 [2024-12-05 19:42:41.543099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:22.735 [2024-12-05 19:42:41.543109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.735 [2024-12-05 19:42:41.543116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.735 [2024-12-05 19:42:41.543181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:22.735 [2024-12-05 19:42:41.543193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:22.735 [2024-12-05 19:42:41.543202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:22.735 [2024-12-05 19:42:41.543210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:22.735 [2024-12-05 19:42:41.543337] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 347.665 ms, result 0 00:24:22.735 true 00:24:22.735 19:42:41 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 77240 00:24:22.735 19:42:41 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77240 ']' 00:24:22.735 19:42:41 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77240 00:24:22.735 19:42:41 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:24:22.735 19:42:41 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:22.735 19:42:41 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77240 00:24:22.735 19:42:41 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:22.735 killing process with pid 77240 00:24:22.735 19:42:41 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:24:22.735 19:42:41 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77240' 00:24:22.735 19:42:41 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 77240 00:24:22.735 19:42:41 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 77240 00:24:29.292 19:42:47 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:24:32.610 262144+0 records in 00:24:32.610 262144+0 records out 00:24:32.610 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.54877 s, 303 MB/s 00:24:32.610 19:42:51 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:34.507 19:42:53 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:34.507 [2024-12-05 19:42:53.047941] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:24:34.507 [2024-12-05 19:42:53.048051] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77449 ] 00:24:34.507 [2024-12-05 19:42:53.199524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.507 [2024-12-05 19:42:53.299481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.766 [2024-12-05 19:42:53.557460] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:34.766 [2024-12-05 19:42:53.557533] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:34.766 [2024-12-05 19:42:53.710691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.766 [2024-12-05 19:42:53.710757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:34.766 [2024-12-05 19:42:53.710770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:34.766 [2024-12-05 19:42:53.710779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.766 [2024-12-05 19:42:53.710833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.766 [2024-12-05 19:42:53.710846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:34.766 [2024-12-05 19:42:53.710855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:34.766 [2024-12-05 19:42:53.710862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.766 [2024-12-05 19:42:53.710881] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:34.766 [2024-12-05 19:42:53.711658] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:34.766 [2024-12-05 19:42:53.711685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.766 [2024-12-05 19:42:53.711693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:34.766 [2024-12-05 19:42:53.711701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.807 ms 00:24:34.766 [2024-12-05 19:42:53.711709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.766 [2024-12-05 19:42:53.712886] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:34.766 [2024-12-05 19:42:53.725511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.766 [2024-12-05 19:42:53.725569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:34.766 [2024-12-05 19:42:53.725582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.623 ms 00:24:34.766 [2024-12-05 19:42:53.725590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.766 [2024-12-05 19:42:53.725680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.766 [2024-12-05 19:42:53.725691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:34.766 [2024-12-05 19:42:53.725700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:34.766 [2024-12-05 19:42:53.725708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.766 [2024-12-05 19:42:53.731152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.766 [2024-12-05 19:42:53.731194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:34.766 [2024-12-05 19:42:53.731205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.359 ms 00:24:34.766 [2024-12-05 19:42:53.731218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.766 [2024-12-05 19:42:53.731297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.766 [2024-12-05 19:42:53.731306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:34.766 [2024-12-05 19:42:53.731314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:24:34.766 [2024-12-05 19:42:53.731321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.766 [2024-12-05 19:42:53.731379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.766 [2024-12-05 19:42:53.731389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:34.766 [2024-12-05 19:42:53.731397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:34.766 [2024-12-05 19:42:53.731404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.766 [2024-12-05 19:42:53.731429] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:34.766 [2024-12-05 19:42:53.734801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.766 [2024-12-05 19:42:53.734836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:34.766 [2024-12-05 19:42:53.734848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.378 ms 00:24:34.766 [2024-12-05 19:42:53.734856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.766 [2024-12-05 19:42:53.734892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.766 [2024-12-05 19:42:53.734901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:34.766 [2024-12-05 19:42:53.734909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:34.766 [2024-12-05 19:42:53.734917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.766 [2024-12-05 19:42:53.734938] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:34.766 [2024-12-05 19:42:53.734958] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:34.766 [2024-12-05 19:42:53.734993] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:34.766 [2024-12-05 19:42:53.735011] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:34.766 [2024-12-05 19:42:53.735112] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:34.766 [2024-12-05 19:42:53.735124] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:34.766 [2024-12-05 19:42:53.735146] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:34.766 [2024-12-05 19:42:53.735156] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:34.766 [2024-12-05 19:42:53.735165] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:34.766 [2024-12-05 19:42:53.735173] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:34.766 [2024-12-05 19:42:53.735181] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:34.767 [2024-12-05 19:42:53.735191] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:34.767 [2024-12-05 19:42:53.735198] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:34.767 [2024-12-05 19:42:53.735206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.767 [2024-12-05 19:42:53.735213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:34.767 [2024-12-05 19:42:53.735221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:24:34.767 [2024-12-05 19:42:53.735228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.767 [2024-12-05 19:42:53.735312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.767 [2024-12-05 19:42:53.735320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:34.767 [2024-12-05 19:42:53.735327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:24:34.767 [2024-12-05 19:42:53.735334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.767 [2024-12-05 19:42:53.735459] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:34.767 [2024-12-05 19:42:53.735480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:34.767 [2024-12-05 19:42:53.735488] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:34.767 [2024-12-05 19:42:53.735496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.767 [2024-12-05 19:42:53.735504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:34.767 [2024-12-05 19:42:53.735512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:34.767 [2024-12-05 19:42:53.735519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:34.767 [2024-12-05 19:42:53.735525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:34.767 [2024-12-05 19:42:53.735532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:34.767 [2024-12-05 
19:42:53.735538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:34.767 [2024-12-05 19:42:53.735545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:34.767 [2024-12-05 19:42:53.735552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:34.767 [2024-12-05 19:42:53.735558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:34.767 [2024-12-05 19:42:53.735570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:34.767 [2024-12-05 19:42:53.735576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:34.767 [2024-12-05 19:42:53.735583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.767 [2024-12-05 19:42:53.735589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:34.767 [2024-12-05 19:42:53.735596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:34.767 [2024-12-05 19:42:53.735603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.767 [2024-12-05 19:42:53.735610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:34.767 [2024-12-05 19:42:53.735616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:34.767 [2024-12-05 19:42:53.735622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:34.767 [2024-12-05 19:42:53.735628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:34.767 [2024-12-05 19:42:53.735635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:34.767 [2024-12-05 19:42:53.735641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:34.767 [2024-12-05 19:42:53.735647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:34.767 [2024-12-05 19:42:53.735654] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:34.767 [2024-12-05 19:42:53.735660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:34.767 [2024-12-05 19:42:53.735666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:34.767 [2024-12-05 19:42:53.735672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:34.767 [2024-12-05 19:42:53.735678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:34.767 [2024-12-05 19:42:53.735685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:34.767 [2024-12-05 19:42:53.735691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:34.767 [2024-12-05 19:42:53.735697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:34.767 [2024-12-05 19:42:53.735703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:34.767 [2024-12-05 19:42:53.735710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:34.767 [2024-12-05 19:42:53.735716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:34.767 [2024-12-05 19:42:53.735722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:34.767 [2024-12-05 19:42:53.735729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:34.767 [2024-12-05 19:42:53.735735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.767 [2024-12-05 19:42:53.735742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:24:34.767 [2024-12-05 19:42:53.735748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:34.767 [2024-12-05 19:42:53.735754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.767 [2024-12-05 19:42:53.735761] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:34.767 [2024-12-05 19:42:53.735768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:34.767 [2024-12-05 19:42:53.735775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:34.767 [2024-12-05 19:42:53.735784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:34.767 [2024-12-05 19:42:53.735791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:34.767 [2024-12-05 19:42:53.735797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:34.767 [2024-12-05 19:42:53.735803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:34.767 [2024-12-05 19:42:53.735812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:34.767 [2024-12-05 19:42:53.735818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:34.767 [2024-12-05 19:42:53.735824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:34.767 [2024-12-05 19:42:53.735833] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:34.767 [2024-12-05 19:42:53.735841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:34.767 [2024-12-05 19:42:53.735852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:34.767 [2024-12-05 19:42:53.735860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:34.767 [2024-12-05 19:42:53.735867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:34.767 [2024-12-05 19:42:53.735873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:34.767 [2024-12-05 19:42:53.735880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:34.767 [2024-12-05 19:42:53.735887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:34.767 [2024-12-05 19:42:53.735894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:34.767 [2024-12-05 19:42:53.735901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:34.767 [2024-12-05 19:42:53.735908] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:34.767 [2024-12-05 19:42:53.735916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:34.767 [2024-12-05 19:42:53.735923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:34.767 [2024-12-05 19:42:53.735929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:34.767 [2024-12-05 19:42:53.735937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:34.767 [2024-12-05 19:42:53.735944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:34.767 [2024-12-05 19:42:53.735951] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:34.767 [2024-12-05 19:42:53.735959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:34.767 [2024-12-05 19:42:53.735967] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:34.767 [2024-12-05 19:42:53.735974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:34.767 [2024-12-05 19:42:53.735981] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:34.767 [2024-12-05 19:42:53.735989] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:34.767 [2024-12-05 19:42:53.735995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.767 [2024-12-05 19:42:53.736002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:34.767 [2024-12-05 19:42:53.736009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:24:34.767 [2024-12-05 19:42:53.736016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.767 [2024-12-05 19:42:53.762076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.767 [2024-12-05 19:42:53.762136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:34.767 [2024-12-05 19:42:53.762149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.004 ms 00:24:34.767 [2024-12-05 19:42:53.762161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:34.767 [2024-12-05 19:42:53.762261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:34.767 [2024-12-05 19:42:53.762269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:34.767 [2024-12-05 19:42:53.762277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:24:34.767 [2024-12-05 19:42:53.762284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.026 [2024-12-05 19:42:53.809596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.026 [2024-12-05 19:42:53.809653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:35.026 [2024-12-05 19:42:53.809666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.242 ms 00:24:35.026 [2024-12-05 19:42:53.809674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.026 [2024-12-05 19:42:53.809732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.026 [2024-12-05 
19:42:53.809741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:35.027 [2024-12-05 19:42:53.809753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:35.027 [2024-12-05 19:42:53.809760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.027 [2024-12-05 19:42:53.810207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.027 [2024-12-05 19:42:53.810234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:35.027 [2024-12-05 19:42:53.810244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.371 ms 00:24:35.027 [2024-12-05 19:42:53.810251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.027 [2024-12-05 19:42:53.810381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.027 [2024-12-05 19:42:53.810397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:35.027 [2024-12-05 19:42:53.810410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:24:35.027 [2024-12-05 19:42:53.810418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.027 [2024-12-05 19:42:53.823639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.027 [2024-12-05 19:42:53.823687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:35.027 [2024-12-05 19:42:53.823699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.202 ms 00:24:35.027 [2024-12-05 19:42:53.823707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.027 [2024-12-05 19:42:53.835984] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:24:35.027 [2024-12-05 19:42:53.836039] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:35.027 [2024-12-05 19:42:53.836052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.027 [2024-12-05 19:42:53.836060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:35.027 [2024-12-05 19:42:53.836071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.231 ms 00:24:35.027 [2024-12-05 19:42:53.836079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.027 [2024-12-05 19:42:53.860712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.027 [2024-12-05 19:42:53.860783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:35.027 [2024-12-05 19:42:53.860796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.561 ms 00:24:35.027 [2024-12-05 19:42:53.860804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.027 [2024-12-05 19:42:53.873161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.027 [2024-12-05 19:42:53.873213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:35.027 [2024-12-05 19:42:53.873225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.288 ms 00:24:35.027 [2024-12-05 19:42:53.873233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.027 [2024-12-05 19:42:53.884730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.027 [2024-12-05 19:42:53.884778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:24:35.027 [2024-12-05 19:42:53.884789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.448 ms 00:24:35.027 [2024-12-05 19:42:53.884797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.027 [2024-12-05 19:42:53.885463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.027 [2024-12-05 19:42:53.885489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:35.027 [2024-12-05 19:42:53.885498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:24:35.027 [2024-12-05 19:42:53.885509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.027 [2024-12-05 19:42:53.941648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.027 [2024-12-05 19:42:53.941704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:35.027 [2024-12-05 19:42:53.941717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.121 ms 00:24:35.027 [2024-12-05 19:42:53.941731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.027 [2024-12-05 19:42:53.952782] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:35.027 [2024-12-05 19:42:53.955522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.027 [2024-12-05 19:42:53.955560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:35.027 [2024-12-05 19:42:53.955572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.728 ms 00:24:35.027 [2024-12-05 19:42:53.955581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.027 [2024-12-05 19:42:53.955691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.027 [2024-12-05 19:42:53.955702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:35.027 [2024-12-05 19:42:53.955711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:35.027 [2024-12-05 19:42:53.955718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.027 [2024-12-05 19:42:53.955784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.027 [2024-12-05 19:42:53.955795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:35.027 [2024-12-05 19:42:53.955803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:35.027 [2024-12-05 19:42:53.955810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.027 [2024-12-05 19:42:53.955829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.027 [2024-12-05 19:42:53.955837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:35.027 [2024-12-05 19:42:53.955845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:35.027 [2024-12-05 19:42:53.955852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.027 [2024-12-05 19:42:53.955881] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:35.027 [2024-12-05 19:42:53.955898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.027 [2024-12-05 19:42:53.955907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:35.027 [2024-12-05 19:42:53.955915] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:35.027 [2024-12-05 19:42:53.955922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.027 [2024-12-05 19:42:53.979719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.027 [2024-12-05 19:42:53.979772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:35.027 [2024-12-05 19:42:53.979786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.777 ms 00:24:35.027 [2024-12-05 19:42:53.979799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.027 [2024-12-05 19:42:53.979881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:35.027 [2024-12-05 19:42:53.979891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:35.027 [2024-12-05 19:42:53.979900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:24:35.027 [2024-12-05 19:42:53.979907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:35.027 [2024-12-05 19:42:53.981457] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 270.323 ms, result 0 00:24:36.399  [2024-12-05T19:42:56.335Z] Copying: 44/1024 [MB] (44 MBps) [2024-12-05T19:42:57.267Z] Copying: 90/1024 [MB] (45 MBps) [2024-12-05T19:42:58.201Z] Copying: 135/1024 [MB] (45 MBps) [2024-12-05T19:42:59.133Z] Copying: 179/1024 [MB] (43 MBps) [2024-12-05T19:43:00.065Z] Copying: 223/1024 [MB] (44 MBps) [2024-12-05T19:43:00.998Z] Copying: 267/1024 [MB] (43 MBps) [2024-12-05T19:43:02.369Z] Copying: 312/1024 [MB] (44 MBps) [2024-12-05T19:43:03.302Z] Copying: 355/1024 [MB] (43 MBps) [2024-12-05T19:43:04.235Z] Copying: 399/1024 [MB] (44 MBps) [2024-12-05T19:43:05.165Z] Copying: 444/1024 [MB] (44 MBps) [2024-12-05T19:43:06.096Z] Copying: 490/1024 [MB] (45 MBps) [2024-12-05T19:43:07.032Z] Copying: 535/1024 [MB] (45 MBps) [2024-12-05T19:43:08.405Z] Copying: 572/1024 [MB] (37 MBps) [2024-12-05T19:43:09.347Z] Copying: 622/1024 [MB] (49 MBps) [2024-12-05T19:43:10.276Z] Copying: 667/1024 [MB] (45 MBps) [2024-12-05T19:43:11.207Z] Copying: 705/1024 [MB] (37 MBps) [2024-12-05T19:43:12.142Z] Copying: 754/1024 [MB] (48 MBps) [2024-12-05T19:43:13.074Z] Copying: 797/1024 [MB] (43 MBps) [2024-12-05T19:43:14.005Z] Copying: 841/1024 [MB] (43 MBps) [2024-12-05T19:43:15.374Z] Copying: 875/1024 [MB] (33 MBps) [2024-12-05T19:43:16.306Z] Copying: 919/1024 [MB] (43 MBps) [2024-12-05T19:43:17.240Z] Copying: 963/1024 [MB] (44 MBps) [2024-12-05T19:43:17.498Z] Copying: 1008/1024 [MB] (45 MBps) [2024-12-05T19:43:17.499Z] Copying: 1024/1024 [MB] (average 43 MBps)[2024-12-05 19:43:17.330964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.493 [2024-12-05 19:43:17.331019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:58.493 [2024-12-05 19:43:17.331032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:58.493 [2024-12-05 19:43:17.331041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.493 [2024-12-05 19:43:17.331062] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:58.493 [2024-12-05 19:43:17.333684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.493 [2024-12-05 19:43:17.333722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:58.493 [2024-12-05 19:43:17.333741] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.606 ms 00:24:58.493 [2024-12-05 19:43:17.333750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.493 [2024-12-05 19:43:17.335166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.493 [2024-12-05 19:43:17.335196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:58.493 [2024-12-05 19:43:17.335205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.393 ms 00:24:58.493 [2024-12-05 19:43:17.335213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.493 [2024-12-05 19:43:17.348823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.493 [2024-12-05 19:43:17.348878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:58.493 [2024-12-05 19:43:17.348890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.591 ms 00:24:58.493 [2024-12-05 19:43:17.348898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.493 [2024-12-05 19:43:17.355439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.493 [2024-12-05 19:43:17.355480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:58.493 [2024-12-05 19:43:17.355492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.501 ms 00:24:58.493 [2024-12-05 19:43:17.355500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.493 [2024-12-05 19:43:17.379698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.493 [2024-12-05 19:43:17.379753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:58.493 [2024-12-05 19:43:17.379766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.130 ms 00:24:58.493 [2024-12-05 19:43:17.379774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.493 [2024-12-05 19:43:17.394337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.493 [2024-12-05 19:43:17.394387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:58.493 [2024-12-05 19:43:17.394401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.505 ms 00:24:58.493 [2024-12-05 19:43:17.394408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.493 [2024-12-05 19:43:17.394575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.493 [2024-12-05 19:43:17.394588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:58.493 [2024-12-05 19:43:17.394597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:24:58.493 [2024-12-05 19:43:17.394605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.493 [2024-12-05 19:43:17.419018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.493 [2024-12-05 19:43:17.419063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:58.493 [2024-12-05 19:43:17.419075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.397 ms 00:24:58.493 [2024-12-05 19:43:17.419082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.493 [2024-12-05 19:43:17.442907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.493 [2024-12-05 19:43:17.442952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist trim metadata 00:24:58.493 [2024-12-05 19:43:17.442964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.761 ms 00:24:58.493 [2024-12-05 19:43:17.442971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.493 [2024-12-05 19:43:17.466218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.493 [2024-12-05 19:43:17.466265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:58.493 [2024-12-05 19:43:17.466277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.197 ms 00:24:58.493 [2024-12-05 19:43:17.466285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.493 [2024-12-05 19:43:17.489280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.493 [2024-12-05 19:43:17.489327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:58.493 [2024-12-05 19:43:17.489339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.924 ms 00:24:58.493 [2024-12-05 19:43:17.489346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.493 [2024-12-05 19:43:17.489397] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:58.493 [2024-12-05 19:43:17.489412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 
00:24:58.493 [2024-12-05 19:43:17.489542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 
wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:58.493 [2024-12-05 19:43:17.489773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.489996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.490003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.490010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.490018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.490026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.490033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.490041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.490049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.490056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.490064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.490072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.490079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.490087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.490094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.490102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.490109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.490116] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.490124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.490141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.490151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.490159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.490166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.490174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.490182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.490190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.490197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:58.494 [2024-12-05 19:43:17.490213] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:58.494 [2024-12-05 19:43:17.490223] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7b4ebe3b-0e39-46e8-b133-cc994bddeda9 00:24:58.494 [2024-12-05 19:43:17.490233] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:58.494 [2024-12-05 19:43:17.490240] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:58.494 [2024-12-05 19:43:17.490247] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:58.494 [2024-12-05 19:43:17.490255] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:58.494 [2024-12-05 19:43:17.490262] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:58.494 [2024-12-05 19:43:17.490276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:58.494 [2024-12-05 19:43:17.490284] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:58.494 [2024-12-05 19:43:17.490290] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:58.494 [2024-12-05 19:43:17.490298] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:58.494 [2024-12-05 19:43:17.490305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.494 [2024-12-05 19:43:17.490312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:58.494 [2024-12-05 19:43:17.490320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.909 ms 00:24:58.494 [2024-12-05 19:43:17.490327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.752 [2024-12-05 19:43:17.502899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.752 [2024-12-05 19:43:17.502942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:58.752 [2024-12-05 19:43:17.502954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.547 ms 00:24:58.752 [2024-12-05 19:43:17.502962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.752 [2024-12-05 19:43:17.503350] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:58.752 [2024-12-05 19:43:17.503364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:58.752 [2024-12-05 19:43:17.503373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:24:58.752 [2024-12-05 19:43:17.503384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.752 [2024-12-05 19:43:17.535679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.752 [2024-12-05 19:43:17.535728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:58.752 [2024-12-05 19:43:17.535739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.752 [2024-12-05 19:43:17.535747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.752 [2024-12-05 19:43:17.535813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.752 [2024-12-05 19:43:17.535820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:58.752 [2024-12-05 19:43:17.535829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.752 [2024-12-05 19:43:17.535841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.752 [2024-12-05 19:43:17.535905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.752 [2024-12-05 19:43:17.535914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:58.752 [2024-12-05 19:43:17.535922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.752 [2024-12-05 19:43:17.535929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.752 [2024-12-05 19:43:17.535944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.752 [2024-12-05 19:43:17.535952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:58.752 [2024-12-05 19:43:17.535959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.752 [2024-12-05 19:43:17.535966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.752 [2024-12-05 19:43:17.614357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.752 [2024-12-05 19:43:17.614408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:58.752 [2024-12-05 19:43:17.614420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.752 [2024-12-05 19:43:17.614428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.752 [2024-12-05 19:43:17.678447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.752 [2024-12-05 19:43:17.678494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:58.752 [2024-12-05 19:43:17.678505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.752 [2024-12-05 19:43:17.678522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.752 [2024-12-05 19:43:17.678589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.752 [2024-12-05 19:43:17.678599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:58.752 [2024-12-05 19:43:17.678606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.752 [2024-12-05 19:43:17.678614] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.752 [2024-12-05 19:43:17.678647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.753 [2024-12-05 19:43:17.678655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:58.753 [2024-12-05 19:43:17.678663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.753 [2024-12-05 19:43:17.678670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.753 [2024-12-05 19:43:17.678757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.753 [2024-12-05 19:43:17.678767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:58.753 [2024-12-05 19:43:17.678775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.753 [2024-12-05 19:43:17.678782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.753 [2024-12-05 19:43:17.678810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.753 [2024-12-05 19:43:17.678819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:58.753 [2024-12-05 19:43:17.678827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.753 [2024-12-05 19:43:17.678834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.753 [2024-12-05 19:43:17.678866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.753 [2024-12-05 19:43:17.678877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:58.753 [2024-12-05 19:43:17.678884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.753 [2024-12-05 19:43:17.678892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.753 [2024-12-05 19:43:17.678928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:58.753 [2024-12-05 19:43:17.678937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:58.753 [2024-12-05 19:43:17.678945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:58.753 [2024-12-05 19:43:17.678952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:58.753 [2024-12-05 19:43:17.679060] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 348.069 ms, result 0 00:25:01.278 00:25:01.278 00:25:01.278 19:43:19 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:25:01.278 [2024-12-05 19:43:19.799686] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
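For reference, the full round trip that ftl/restore.sh drives here, whose individual steps (@69, @70, @73 and @74) are echoed above, boils down to the following minimal sketch. The closing md5sum comparison is an assumption; this excerpt ends before the verification itself is printed.

    # Sketch of the ftl_restore data round trip: write 1 GiB through FTL,
    # let the device shut down cleanly, read it back after restore, verify.
    SPDK=/home/vagrant/spdk_repo/spdk
    TESTFILE=$SPDK/test/ftl/testfile
    FTL_JSON=$SPDK/test/ftl/config/ftl.json

    dd if=/dev/urandom of="$TESTFILE" bs=4K count=256K     # 256Ki x 4 KiB = 1 GiB of random data
    md5sum "$TESTFILE" > "$TESTFILE.md5"                   # checksum before the round trip
    "$SPDK/build/bin/spdk_dd" --if="$TESTFILE" --ob=ftl0 --json="$FTL_JSON"
    "$SPDK/build/bin/spdk_dd" --ib=ftl0 --of="$TESTFILE" --json="$FTL_JSON" --count=262144
    md5sum -c "$TESTFILE.md5"                              # assumed check: data must survive the restore

The --count of 262144 matches the 1 GiB written (262144 blocks at the 4 KiB FTL block size), and the first spdk_dd run's 'FTL shutdown' followed by the second run's 'FTL startup' is exactly the persist/restore cycle the test exercises.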
00:25:01.278 [2024-12-05 19:43:19.799820] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77721 ] 00:25:01.278 [2024-12-05 19:43:19.959648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.278 [2024-12-05 19:43:20.062819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.537 [2024-12-05 19:43:20.322052] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:01.537 [2024-12-05 19:43:20.322123] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:01.537 [2024-12-05 19:43:20.475684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.537 [2024-12-05 19:43:20.475750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:01.537 [2024-12-05 19:43:20.475764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:01.537 [2024-12-05 19:43:20.475773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.537 [2024-12-05 19:43:20.475827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.537 [2024-12-05 19:43:20.475840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:01.537 [2024-12-05 19:43:20.475848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:01.537 [2024-12-05 19:43:20.475855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.537 [2024-12-05 19:43:20.475875] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:01.537 [2024-12-05 19:43:20.476594] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:01.537 [2024-12-05 19:43:20.476625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.537 [2024-12-05 19:43:20.476633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:01.537 [2024-12-05 19:43:20.476642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:25:01.537 [2024-12-05 19:43:20.476649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.537 [2024-12-05 19:43:20.477727] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:01.538 [2024-12-05 19:43:20.489870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.538 [2024-12-05 19:43:20.489919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:01.538 [2024-12-05 19:43:20.489933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.144 ms 00:25:01.538 [2024-12-05 19:43:20.489941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.538 [2024-12-05 19:43:20.490021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.538 [2024-12-05 19:43:20.490032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:01.538 [2024-12-05 19:43:20.490040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:25:01.538 [2024-12-05 19:43:20.490047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.538 [2024-12-05 19:43:20.495111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:01.538 [2024-12-05 19:43:20.495160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:01.538 [2024-12-05 19:43:20.495170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.993 ms 00:25:01.538 [2024-12-05 19:43:20.495182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.538 [2024-12-05 19:43:20.495256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.538 [2024-12-05 19:43:20.495265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:01.538 [2024-12-05 19:43:20.495273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:25:01.538 [2024-12-05 19:43:20.495280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.538 [2024-12-05 19:43:20.495328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.538 [2024-12-05 19:43:20.495338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:01.538 [2024-12-05 19:43:20.495346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:01.538 [2024-12-05 19:43:20.495353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.538 [2024-12-05 19:43:20.495378] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:01.538 [2024-12-05 19:43:20.498693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.538 [2024-12-05 19:43:20.498727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:01.538 [2024-12-05 19:43:20.498738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.321 ms 00:25:01.538 [2024-12-05 19:43:20.498745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.538 [2024-12-05 19:43:20.498778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.538 [2024-12-05 19:43:20.498787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:01.538 [2024-12-05 19:43:20.498795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:01.538 [2024-12-05 19:43:20.498802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.538 [2024-12-05 19:43:20.498822] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:01.538 [2024-12-05 19:43:20.498841] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:01.538 [2024-12-05 19:43:20.498875] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:01.538 [2024-12-05 19:43:20.498893] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:01.538 [2024-12-05 19:43:20.498994] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:01.538 [2024-12-05 19:43:20.499046] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:01.538 [2024-12-05 19:43:20.499056] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:01.538 [2024-12-05 19:43:20.499066] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:01.538 [2024-12-05 19:43:20.499075] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:01.538 [2024-12-05 19:43:20.499083] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:01.538 [2024-12-05 19:43:20.499090] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:01.538 [2024-12-05 19:43:20.499100] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:01.538 [2024-12-05 19:43:20.499107] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:01.538 [2024-12-05 19:43:20.499114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.538 [2024-12-05 19:43:20.499121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:01.538 [2024-12-05 19:43:20.499141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:25:01.538 [2024-12-05 19:43:20.499148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.538 [2024-12-05 19:43:20.499231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.538 [2024-12-05 19:43:20.499239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:01.538 [2024-12-05 19:43:20.499247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:01.538 [2024-12-05 19:43:20.499254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.538 [2024-12-05 19:43:20.499373] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:01.538 [2024-12-05 19:43:20.499385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:01.538 [2024-12-05 19:43:20.499392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:01.538 [2024-12-05 19:43:20.499400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:01.538 [2024-12-05 19:43:20.499408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:01.538 [2024-12-05 19:43:20.499414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:01.538 [2024-12-05 19:43:20.499421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:01.538 [2024-12-05 19:43:20.499427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:01.538 [2024-12-05 19:43:20.499434] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:01.538 [2024-12-05 19:43:20.499441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:01.538 [2024-12-05 19:43:20.499447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:01.538 [2024-12-05 19:43:20.499454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:01.538 [2024-12-05 19:43:20.499460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:01.538 [2024-12-05 19:43:20.499473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:01.538 [2024-12-05 19:43:20.499481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:01.538 [2024-12-05 19:43:20.499487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:01.538 [2024-12-05 19:43:20.499493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:01.538 [2024-12-05 19:43:20.499500] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:01.538 [2024-12-05 19:43:20.499506] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:01.538 [2024-12-05 19:43:20.499512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:01.538 [2024-12-05 19:43:20.499518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:01.538 [2024-12-05 19:43:20.499524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:01.538 [2024-12-05 19:43:20.499531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:01.538 [2024-12-05 19:43:20.499537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:01.538 [2024-12-05 19:43:20.499543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:01.539 [2024-12-05 19:43:20.499549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:01.539 [2024-12-05 19:43:20.499555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:01.539 [2024-12-05 19:43:20.499561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:01.539 [2024-12-05 19:43:20.499567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:01.539 [2024-12-05 19:43:20.499574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:01.539 [2024-12-05 19:43:20.499580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:01.539 [2024-12-05 19:43:20.499586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:01.539 [2024-12-05 19:43:20.499592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:01.539 [2024-12-05 19:43:20.499598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:01.539 [2024-12-05 19:43:20.499605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:01.539 [2024-12-05 19:43:20.499612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:01.539 [2024-12-05 19:43:20.499618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:01.539 [2024-12-05 19:43:20.499624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:01.539 [2024-12-05 19:43:20.499631] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:01.539 [2024-12-05 19:43:20.499637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:01.539 [2024-12-05 19:43:20.499643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:01.539 [2024-12-05 19:43:20.499650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:01.539 [2024-12-05 19:43:20.499656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:01.539 [2024-12-05 19:43:20.499663] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:01.539 [2024-12-05 19:43:20.499670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:01.539 [2024-12-05 19:43:20.499677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:01.539 [2024-12-05 19:43:20.499685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:01.539 [2024-12-05 19:43:20.499692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:01.539 [2024-12-05 19:43:20.499699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:01.539 [2024-12-05 19:43:20.499705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:01.539 
[2024-12-05 19:43:20.499711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:01.539 [2024-12-05 19:43:20.499717] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:01.539 [2024-12-05 19:43:20.499724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:01.539 [2024-12-05 19:43:20.499732] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:01.539 [2024-12-05 19:43:20.499740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:01.539 [2024-12-05 19:43:20.499750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:01.539 [2024-12-05 19:43:20.499757] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:01.539 [2024-12-05 19:43:20.499764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:01.539 [2024-12-05 19:43:20.499771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:01.539 [2024-12-05 19:43:20.499777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:01.539 [2024-12-05 19:43:20.499784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:01.539 [2024-12-05 19:43:20.499791] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:01.539 [2024-12-05 19:43:20.499798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:01.539 [2024-12-05 19:43:20.499804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:01.539 [2024-12-05 19:43:20.499811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:01.539 [2024-12-05 19:43:20.499818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:01.539 [2024-12-05 19:43:20.499825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:01.539 [2024-12-05 19:43:20.499831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:01.539 [2024-12-05 19:43:20.499839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:01.539 [2024-12-05 19:43:20.499845] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:01.539 [2024-12-05 19:43:20.499853] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:01.539 [2024-12-05 19:43:20.499861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:01.539 [2024-12-05 19:43:20.499868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:01.539 [2024-12-05 19:43:20.499874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:01.539 [2024-12-05 19:43:20.499881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:01.539 [2024-12-05 19:43:20.499888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.539 [2024-12-05 19:43:20.499895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:01.539 [2024-12-05 19:43:20.499903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:25:01.539 [2024-12-05 19:43:20.499910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.539 [2024-12-05 19:43:20.525562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.539 [2024-12-05 19:43:20.525612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:01.539 [2024-12-05 19:43:20.525627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.605 ms 00:25:01.539 [2024-12-05 19:43:20.525634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.539 [2024-12-05 19:43:20.525728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.539 [2024-12-05 19:43:20.525737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:01.539 [2024-12-05 19:43:20.525746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:25:01.539 [2024-12-05 19:43:20.525757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.798 [2024-12-05 19:43:20.573489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.798 [2024-12-05 19:43:20.573545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:01.798 [2024-12-05 19:43:20.573558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.668 ms 00:25:01.798 [2024-12-05 19:43:20.573566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.798 [2024-12-05 19:43:20.573625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.798 [2024-12-05 19:43:20.573637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:01.798 [2024-12-05 19:43:20.573646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:01.798 [2024-12-05 19:43:20.573653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.798 [2024-12-05 19:43:20.574057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.798 [2024-12-05 19:43:20.574085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:01.798 [2024-12-05 19:43:20.574094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:25:01.798 [2024-12-05 19:43:20.574102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.798 [2024-12-05 19:43:20.574244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.798 [2024-12-05 19:43:20.574267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:01.798 [2024-12-05 19:43:20.574275] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:25:01.798 [2024-12-05 19:43:20.574284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.798 [2024-12-05 19:43:20.587222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.798 [2024-12-05 19:43:20.587267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:01.798 [2024-12-05 19:43:20.587279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.919 ms 00:25:01.798 [2024-12-05 19:43:20.587286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.798 [2024-12-05 19:43:20.599707] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:01.798 [2024-12-05 19:43:20.599755] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:01.798 [2024-12-05 19:43:20.599768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.798 [2024-12-05 19:43:20.599776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:01.799 [2024-12-05 19:43:20.599787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.370 ms 00:25:01.799 [2024-12-05 19:43:20.599796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.799 [2024-12-05 19:43:20.624185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.799 [2024-12-05 19:43:20.624258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:01.799 [2024-12-05 19:43:20.624272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.337 ms 00:25:01.799 [2024-12-05 19:43:20.624282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.799 [2024-12-05 19:43:20.636929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.799 [2024-12-05 19:43:20.636981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:01.799 [2024-12-05 19:43:20.636993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.575 ms 00:25:01.799 [2024-12-05 19:43:20.637000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.799 [2024-12-05 19:43:20.648816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.799 [2024-12-05 19:43:20.648867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:01.799 [2024-12-05 19:43:20.648879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.764 ms 00:25:01.799 [2024-12-05 19:43:20.648886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.799 [2024-12-05 19:43:20.649549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.799 [2024-12-05 19:43:20.649571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:01.799 [2024-12-05 19:43:20.649580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:25:01.799 [2024-12-05 19:43:20.649587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.799 [2024-12-05 19:43:20.705821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.799 [2024-12-05 19:43:20.705886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:01.799 [2024-12-05 19:43:20.705899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 56.214 ms 00:25:01.799 [2024-12-05 19:43:20.705907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.799 [2024-12-05 19:43:20.716847] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:01.799 [2024-12-05 19:43:20.719591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.799 [2024-12-05 19:43:20.719627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:01.799 [2024-12-05 19:43:20.719640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.603 ms 00:25:01.799 [2024-12-05 19:43:20.719648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.799 [2024-12-05 19:43:20.719757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.799 [2024-12-05 19:43:20.719767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:01.799 [2024-12-05 19:43:20.719779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:01.799 [2024-12-05 19:43:20.719787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.799 [2024-12-05 19:43:20.719851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.799 [2024-12-05 19:43:20.719861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:01.799 [2024-12-05 19:43:20.719870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:01.799 [2024-12-05 19:43:20.719877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.799 [2024-12-05 19:43:20.719895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.799 [2024-12-05 19:43:20.719903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:01.799 [2024-12-05 19:43:20.719910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:01.799 [2024-12-05 19:43:20.719920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.799 [2024-12-05 19:43:20.719949] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:01.799 [2024-12-05 19:43:20.719959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.799 [2024-12-05 19:43:20.719966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:01.799 [2024-12-05 19:43:20.719973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:01.799 [2024-12-05 19:43:20.719981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.799 [2024-12-05 19:43:20.743442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.799 [2024-12-05 19:43:20.743495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:01.799 [2024-12-05 19:43:20.743511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.443 ms 00:25:01.799 [2024-12-05 19:43:20.743519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:01.799 [2024-12-05 19:43:20.743603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:01.799 [2024-12-05 19:43:20.743613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:01.799 [2024-12-05 19:43:20.743621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:01.799 [2024-12-05 19:43:20.743629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:01.799 [2024-12-05 19:43:20.744582] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 268.475 ms, result 0 00:25:03.189  [2024-12-05T19:43:23.189Z] Copying: 42/1024 [MB] (42 MBps) [2024-12-05T19:43:24.143Z] Copying: 85/1024 [MB] (42 MBps) [2024-12-05T19:43:25.074Z] Copying: 131/1024 [MB] (46 MBps) [2024-12-05T19:43:26.005Z] Copying: 179/1024 [MB] (47 MBps) [2024-12-05T19:43:26.938Z] Copying: 226/1024 [MB] (47 MBps) [2024-12-05T19:43:28.312Z] Copying: 273/1024 [MB] (47 MBps) [2024-12-05T19:43:29.246Z] Copying: 318/1024 [MB] (44 MBps) [2024-12-05T19:43:30.180Z] Copying: 364/1024 [MB] (45 MBps) [2024-12-05T19:43:31.114Z] Copying: 408/1024 [MB] (44 MBps) [2024-12-05T19:43:32.049Z] Copying: 456/1024 [MB] (47 MBps) [2024-12-05T19:43:32.983Z] Copying: 502/1024 [MB] (46 MBps) [2024-12-05T19:43:34.355Z] Copying: 550/1024 [MB] (47 MBps) [2024-12-05T19:43:34.944Z] Copying: 597/1024 [MB] (47 MBps) [2024-12-05T19:43:36.341Z] Copying: 641/1024 [MB] (43 MBps) [2024-12-05T19:43:37.275Z] Copying: 686/1024 [MB] (45 MBps) [2024-12-05T19:43:38.212Z] Copying: 732/1024 [MB] (45 MBps) [2024-12-05T19:43:39.147Z] Copying: 778/1024 [MB] (46 MBps) [2024-12-05T19:43:40.086Z] Copying: 822/1024 [MB] (43 MBps) [2024-12-05T19:43:41.019Z] Copying: 873/1024 [MB] (51 MBps) [2024-12-05T19:43:41.987Z] Copying: 922/1024 [MB] (48 MBps) [2024-12-05T19:43:43.355Z] Copying: 970/1024 [MB] (47 MBps) [2024-12-05T19:43:43.355Z] Copying: 1020/1024 [MB] (49 MBps) [2024-12-05T19:43:43.612Z] Copying: 1024/1024 [MB] (average 46 MBps)[2024-12-05 19:43:43.571728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.606 [2024-12-05 19:43:43.571797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:24.606 [2024-12-05 19:43:43.571816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:24.606 [2024-12-05 19:43:43.571826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.606 [2024-12-05 19:43:43.571853] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:24.606 [2024-12-05 19:43:43.575162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.606 [2024-12-05 19:43:43.575208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:24.606 [2024-12-05 19:43:43.575222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.291 ms 00:25:24.606 [2024-12-05 19:43:43.575233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.606 [2024-12-05 19:43:43.575511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.606 [2024-12-05 19:43:43.575522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:24.606 [2024-12-05 19:43:43.575533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:25:24.606 [2024-12-05 19:43:43.575542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.606 [2024-12-05 19:43:43.579749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.606 [2024-12-05 19:43:43.579851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:24.606 [2024-12-05 19:43:43.579912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.189 ms 00:25:24.606 [2024-12-05 19:43:43.579934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.606 [2024-12-05 19:43:43.586089] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.606 [2024-12-05 19:43:43.586252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:24.606 [2024-12-05 19:43:43.586308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.118 ms 00:25:24.606 [2024-12-05 19:43:43.586330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.865 [2024-12-05 19:43:43.613048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.865 [2024-12-05 19:43:43.613272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:24.865 [2024-12-05 19:43:43.613335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.632 ms 00:25:24.865 [2024-12-05 19:43:43.613358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.865 [2024-12-05 19:43:43.627035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.865 [2024-12-05 19:43:43.627245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:24.865 [2024-12-05 19:43:43.627312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.630 ms 00:25:24.865 [2024-12-05 19:43:43.627345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.865 [2024-12-05 19:43:43.627542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.865 [2024-12-05 19:43:43.627556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:24.865 [2024-12-05 19:43:43.627565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:25:24.865 [2024-12-05 19:43:43.627573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.865 [2024-12-05 19:43:43.651816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.865 [2024-12-05 19:43:43.651999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:24.865 [2024-12-05 19:43:43.652052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.227 ms 00:25:24.865 [2024-12-05 19:43:43.652075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.865 [2024-12-05 19:43:43.675582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.865 [2024-12-05 19:43:43.675764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:24.865 [2024-12-05 19:43:43.675827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.462 ms 00:25:24.865 [2024-12-05 19:43:43.675849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.865 [2024-12-05 19:43:43.698912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.865 [2024-12-05 19:43:43.699098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:24.865 [2024-12-05 19:43:43.699188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.017 ms 00:25:24.865 [2024-12-05 19:43:43.699212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.865 [2024-12-05 19:43:43.722856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.865 [2024-12-05 19:43:43.723017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:24.865 [2024-12-05 19:43:43.723065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.570 ms 00:25:24.865 [2024-12-05 19:43:43.723087] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:24.865 [2024-12-05 19:43:43.723139] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:24.865 [2024-12-05 19:43:43.723173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.723205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.723274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.723305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.723334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.723362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.723420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.723450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.723479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.723540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.723570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.723598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.723648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.723680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.723737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.723766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.723823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.723854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.723882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.723911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.723971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.724003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.724032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.724060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 
wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.724088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.724175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.724211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.724241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.724269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.724298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.724438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.724466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.724494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.724522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.724583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.724613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.724641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.724698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.724728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.724756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.724807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.724839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.724867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.724895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.725039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.725069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.725097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:24.865 [2024-12-05 19:43:43.725126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.725169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.725232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.725264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.725292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.725320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.725348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.725376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.725440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.725469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.725497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.725525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.725554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.725616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.725645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.725674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.725701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.725729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.725788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.725820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.725848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.725875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.725904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.725932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.726046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.726075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.726103] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.726139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.726169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.726228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.726257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.726317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.726346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.726401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.726432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.726460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.726489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.726551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.726581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.726609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.726639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.726715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.726766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.726796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.726824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.726853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.726912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.726944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.726973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.727001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.727029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.727098] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.727136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:24.866 [2024-12-05 19:43:43.727167] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:24.866 [2024-12-05 19:43:43.727176] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7b4ebe3b-0e39-46e8-b133-cc994bddeda9 00:25:24.866 [2024-12-05 19:43:43.727184] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:24.866 [2024-12-05 19:43:43.727192] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:24.866 [2024-12-05 19:43:43.727199] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:24.866 [2024-12-05 19:43:43.727208] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:24.866 [2024-12-05 19:43:43.727225] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:24.866 [2024-12-05 19:43:43.727233] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:24.866 [2024-12-05 19:43:43.727240] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:24.866 [2024-12-05 19:43:43.727247] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:24.866 [2024-12-05 19:43:43.727254] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:24.866 [2024-12-05 19:43:43.727263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.866 [2024-12-05 19:43:43.727272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:24.866 [2024-12-05 19:43:43.727284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.124 ms 00:25:24.866 [2024-12-05 19:43:43.727292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.866 [2024-12-05 19:43:43.739804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.866 [2024-12-05 19:43:43.739843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:24.866 [2024-12-05 19:43:43.739855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.482 ms 00:25:24.866 [2024-12-05 19:43:43.739863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.866 [2024-12-05 19:43:43.740260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:24.866 [2024-12-05 19:43:43.740281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:24.866 [2024-12-05 19:43:43.740290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:25:24.866 [2024-12-05 19:43:43.740297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.866 [2024-12-05 19:43:43.772606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.866 [2024-12-05 19:43:43.772809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:24.866 [2024-12-05 19:43:43.772826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.866 [2024-12-05 19:43:43.772834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.866 [2024-12-05 19:43:43.772899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.866 [2024-12-05 19:43:43.772912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:24.866 
[2024-12-05 19:43:43.772920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.866 [2024-12-05 19:43:43.772927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.866 [2024-12-05 19:43:43.772998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.866 [2024-12-05 19:43:43.773008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:24.866 [2024-12-05 19:43:43.773015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.866 [2024-12-05 19:43:43.773023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.866 [2024-12-05 19:43:43.773037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.866 [2024-12-05 19:43:43.773045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:24.866 [2024-12-05 19:43:43.773054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.866 [2024-12-05 19:43:43.773061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:24.866 [2024-12-05 19:43:43.849616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:24.866 [2024-12-05 19:43:43.849662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:24.866 [2024-12-05 19:43:43.849673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:24.866 [2024-12-05 19:43:43.849681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.126 [2024-12-05 19:43:43.912537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.126 [2024-12-05 19:43:43.912584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:25.126 [2024-12-05 19:43:43.912602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.126 [2024-12-05 19:43:43.912610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.126 [2024-12-05 19:43:43.912680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.126 [2024-12-05 19:43:43.912690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:25.126 [2024-12-05 19:43:43.912698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.126 [2024-12-05 19:43:43.912705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.126 [2024-12-05 19:43:43.912737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.126 [2024-12-05 19:43:43.912745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:25.126 [2024-12-05 19:43:43.912753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.126 [2024-12-05 19:43:43.912762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.126 [2024-12-05 19:43:43.912845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.126 [2024-12-05 19:43:43.912854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:25.126 [2024-12-05 19:43:43.912861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.126 [2024-12-05 19:43:43.912869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.126 [2024-12-05 19:43:43.912896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.126 [2024-12-05 19:43:43.912904] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:25.126 [2024-12-05 19:43:43.912912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.126 [2024-12-05 19:43:43.912918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.126 [2024-12-05 19:43:43.912954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.126 [2024-12-05 19:43:43.912963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:25.126 [2024-12-05 19:43:43.912970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.126 [2024-12-05 19:43:43.912977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.126 [2024-12-05 19:43:43.913015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:25.126 [2024-12-05 19:43:43.913024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:25.126 [2024-12-05 19:43:43.913032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:25.126 [2024-12-05 19:43:43.913041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:25.126 [2024-12-05 19:43:43.913175] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 341.400 ms, result 0 00:25:26.058 00:25:26.058 00:25:26.058 19:43:44 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:27.961 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:27.961 19:43:46 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:25:27.961 [2024-12-05 19:43:46.682091] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
00:25:27.961 [2024-12-05 19:43:46.682444] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78005 ] 00:25:27.961 [2024-12-05 19:43:46.841661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.961 [2024-12-05 19:43:46.942541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.220 [2024-12-05 19:43:47.203686] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:28.220 [2024-12-05 19:43:47.203760] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:28.479 [2024-12-05 19:43:47.356744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.479 [2024-12-05 19:43:47.356982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:28.479 [2024-12-05 19:43:47.357003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:28.479 [2024-12-05 19:43:47.357011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.479 [2024-12-05 19:43:47.357073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.479 [2024-12-05 19:43:47.357085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:28.479 [2024-12-05 19:43:47.357094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:28.479 [2024-12-05 19:43:47.357101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.479 [2024-12-05 19:43:47.357120] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:28.479 [2024-12-05 19:43:47.357817] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:28.479 [2024-12-05 19:43:47.357833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.479 [2024-12-05 19:43:47.357840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:28.479 [2024-12-05 19:43:47.357849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.717 ms 00:25:28.479 [2024-12-05 19:43:47.357856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.479 [2024-12-05 19:43:47.358992] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:28.479 [2024-12-05 19:43:47.371435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.479 [2024-12-05 19:43:47.371638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:28.479 [2024-12-05 19:43:47.371657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.444 ms 00:25:28.479 [2024-12-05 19:43:47.371666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.479 [2024-12-05 19:43:47.371738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.479 [2024-12-05 19:43:47.371748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:28.479 [2024-12-05 19:43:47.371757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:28.479 [2024-12-05 19:43:47.371764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.479 [2024-12-05 19:43:47.377015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:28.479 [2024-12-05 19:43:47.377056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:28.479 [2024-12-05 19:43:47.377067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.174 ms 00:25:28.479 [2024-12-05 19:43:47.377079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.479 [2024-12-05 19:43:47.377169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.479 [2024-12-05 19:43:47.377179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:28.479 [2024-12-05 19:43:47.377188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:28.479 [2024-12-05 19:43:47.377195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.479 [2024-12-05 19:43:47.377246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.479 [2024-12-05 19:43:47.377255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:28.479 [2024-12-05 19:43:47.377263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:28.479 [2024-12-05 19:43:47.377270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.479 [2024-12-05 19:43:47.377296] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:28.479 [2024-12-05 19:43:47.380816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.479 [2024-12-05 19:43:47.380848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:28.479 [2024-12-05 19:43:47.380860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.526 ms 00:25:28.479 [2024-12-05 19:43:47.380868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.479 [2024-12-05 19:43:47.380902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.479 [2024-12-05 19:43:47.380911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:28.479 [2024-12-05 19:43:47.380919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:28.479 [2024-12-05 19:43:47.380926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.479 [2024-12-05 19:43:47.380949] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:28.479 [2024-12-05 19:43:47.380967] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:28.479 [2024-12-05 19:43:47.381002] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:28.479 [2024-12-05 19:43:47.381024] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:28.479 [2024-12-05 19:43:47.381142] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:28.479 [2024-12-05 19:43:47.381153] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:28.479 [2024-12-05 19:43:47.381164] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:28.479 [2024-12-05 19:43:47.381173] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:28.479 [2024-12-05 19:43:47.381182] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:28.479 [2024-12-05 19:43:47.381190] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:28.480 [2024-12-05 19:43:47.381198] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:28.480 [2024-12-05 19:43:47.381207] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:28.480 [2024-12-05 19:43:47.381214] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:28.480 [2024-12-05 19:43:47.381222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.480 [2024-12-05 19:43:47.381229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:28.480 [2024-12-05 19:43:47.381236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:25:28.480 [2024-12-05 19:43:47.381244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.480 [2024-12-05 19:43:47.381326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.480 [2024-12-05 19:43:47.381334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:28.480 [2024-12-05 19:43:47.381342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:28.480 [2024-12-05 19:43:47.381349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.480 [2024-12-05 19:43:47.381470] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:28.480 [2024-12-05 19:43:47.381481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:28.480 [2024-12-05 19:43:47.381490] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:28.480 [2024-12-05 19:43:47.381497] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.480 [2024-12-05 19:43:47.381505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:28.480 [2024-12-05 19:43:47.381511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:28.480 [2024-12-05 19:43:47.381518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:28.480 [2024-12-05 19:43:47.381526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:28.480 [2024-12-05 19:43:47.381533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:28.480 [2024-12-05 19:43:47.381539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:28.480 [2024-12-05 19:43:47.381546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:28.480 [2024-12-05 19:43:47.381553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:28.480 [2024-12-05 19:43:47.381559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:28.480 [2024-12-05 19:43:47.381571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:28.480 [2024-12-05 19:43:47.381578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:28.480 [2024-12-05 19:43:47.381584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.480 [2024-12-05 19:43:47.381591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:28.480 [2024-12-05 19:43:47.381598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:28.480 [2024-12-05 19:43:47.381606] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.480 [2024-12-05 19:43:47.381613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:28.480 [2024-12-05 19:43:47.381620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:28.480 [2024-12-05 19:43:47.381627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:28.480 [2024-12-05 19:43:47.381633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:28.480 [2024-12-05 19:43:47.381640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:28.480 [2024-12-05 19:43:47.381646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:28.480 [2024-12-05 19:43:47.381652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:28.480 [2024-12-05 19:43:47.381659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:28.480 [2024-12-05 19:43:47.381665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:28.480 [2024-12-05 19:43:47.381671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:28.480 [2024-12-05 19:43:47.381678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:28.480 [2024-12-05 19:43:47.381684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:28.480 [2024-12-05 19:43:47.381691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:28.480 [2024-12-05 19:43:47.381697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:28.480 [2024-12-05 19:43:47.381703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:28.480 [2024-12-05 19:43:47.381710] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:28.480 [2024-12-05 19:43:47.381716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:28.480 [2024-12-05 19:43:47.381722] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:28.480 [2024-12-05 19:43:47.381729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:28.480 [2024-12-05 19:43:47.381735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:28.480 [2024-12-05 19:43:47.381742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.480 [2024-12-05 19:43:47.381748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:28.480 [2024-12-05 19:43:47.381755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:28.480 [2024-12-05 19:43:47.381761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.480 [2024-12-05 19:43:47.381767] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:28.480 [2024-12-05 19:43:47.381775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:28.480 [2024-12-05 19:43:47.381783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:28.480 [2024-12-05 19:43:47.381790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:28.480 [2024-12-05 19:43:47.381797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:28.480 [2024-12-05 19:43:47.381803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:28.480 [2024-12-05 19:43:47.381809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:28.480 
[2024-12-05 19:43:47.381817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:28.480 [2024-12-05 19:43:47.381824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:28.480 [2024-12-05 19:43:47.381830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:28.480 [2024-12-05 19:43:47.381839] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:28.480 [2024-12-05 19:43:47.381848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:28.480 [2024-12-05 19:43:47.381860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:28.480 [2024-12-05 19:43:47.381868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:28.480 [2024-12-05 19:43:47.381874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:28.480 [2024-12-05 19:43:47.381881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:28.480 [2024-12-05 19:43:47.381888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:28.480 [2024-12-05 19:43:47.381895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:28.480 [2024-12-05 19:43:47.381902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:28.480 [2024-12-05 19:43:47.381909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:28.480 [2024-12-05 19:43:47.381916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:28.480 [2024-12-05 19:43:47.381923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:28.480 [2024-12-05 19:43:47.381930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:28.480 [2024-12-05 19:43:47.381937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:28.480 [2024-12-05 19:43:47.381954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:28.480 [2024-12-05 19:43:47.381962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:28.480 [2024-12-05 19:43:47.381969] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:28.480 [2024-12-05 19:43:47.381977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:28.480 [2024-12-05 19:43:47.381985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:28.480 [2024-12-05 19:43:47.381993] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:28.480 [2024-12-05 19:43:47.382000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:28.480 [2024-12-05 19:43:47.382007] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:28.480 [2024-12-05 19:43:47.382015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.480 [2024-12-05 19:43:47.382022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:28.480 [2024-12-05 19:43:47.382030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.613 ms 00:25:28.480 [2024-12-05 19:43:47.382036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.480 [2024-12-05 19:43:47.408093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.480 [2024-12-05 19:43:47.408163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:28.480 [2024-12-05 19:43:47.408175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.001 ms 00:25:28.480 [2024-12-05 19:43:47.408187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.480 [2024-12-05 19:43:47.408283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.480 [2024-12-05 19:43:47.408291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:28.480 [2024-12-05 19:43:47.408299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:25:28.480 [2024-12-05 19:43:47.408306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.480 [2024-12-05 19:43:47.452875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.480 [2024-12-05 19:43:47.453099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:28.480 [2024-12-05 19:43:47.453120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.501 ms 00:25:28.481 [2024-12-05 19:43:47.453146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.481 [2024-12-05 19:43:47.453205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.481 [2024-12-05 19:43:47.453215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:28.481 [2024-12-05 19:43:47.453229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:28.481 [2024-12-05 19:43:47.453236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.481 [2024-12-05 19:43:47.453634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.481 [2024-12-05 19:43:47.453651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:28.481 [2024-12-05 19:43:47.453660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:25:28.481 [2024-12-05 19:43:47.453668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.481 [2024-12-05 19:43:47.453798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.481 [2024-12-05 19:43:47.453807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:28.481 [2024-12-05 19:43:47.453819] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:25:28.481 [2024-12-05 19:43:47.453827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.481 [2024-12-05 19:43:47.467109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.481 [2024-12-05 19:43:47.467172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:28.481 [2024-12-05 19:43:47.467185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.262 ms 00:25:28.481 [2024-12-05 19:43:47.467193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.481 [2024-12-05 19:43:47.479847] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:28.481 [2024-12-05 19:43:47.479905] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:28.481 [2024-12-05 19:43:47.479918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.481 [2024-12-05 19:43:47.479926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:28.481 [2024-12-05 19:43:47.479937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.603 ms 00:25:28.481 [2024-12-05 19:43:47.479944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.739 [2024-12-05 19:43:47.505205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.739 [2024-12-05 19:43:47.505251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:28.739 [2024-12-05 19:43:47.505265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.196 ms 00:25:28.739 [2024-12-05 19:43:47.505274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.739 [2024-12-05 19:43:47.517746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.739 [2024-12-05 19:43:47.517794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:28.739 [2024-12-05 19:43:47.517807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.396 ms 00:25:28.739 [2024-12-05 19:43:47.517814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.739 [2024-12-05 19:43:47.529702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.739 [2024-12-05 19:43:47.529929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:28.739 [2024-12-05 19:43:47.529957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.839 ms 00:25:28.739 [2024-12-05 19:43:47.529965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.739 [2024-12-05 19:43:47.530620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.739 [2024-12-05 19:43:47.530643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:28.739 [2024-12-05 19:43:47.530655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:25:28.739 [2024-12-05 19:43:47.530662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.739 [2024-12-05 19:43:47.587164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.739 [2024-12-05 19:43:47.587232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:28.739 [2024-12-05 19:43:47.587256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 56.482 ms 00:25:28.739 [2024-12-05 19:43:47.587264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.739 [2024-12-05 19:43:47.598453] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:28.739 [2024-12-05 19:43:47.601269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.739 [2024-12-05 19:43:47.601307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:28.739 [2024-12-05 19:43:47.601321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.938 ms 00:25:28.739 [2024-12-05 19:43:47.601329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.739 [2024-12-05 19:43:47.601443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.739 [2024-12-05 19:43:47.601454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:28.739 [2024-12-05 19:43:47.601465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:28.739 [2024-12-05 19:43:47.601473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.739 [2024-12-05 19:43:47.601539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.739 [2024-12-05 19:43:47.601550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:28.739 [2024-12-05 19:43:47.601558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:25:28.739 [2024-12-05 19:43:47.601565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.739 [2024-12-05 19:43:47.601584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.739 [2024-12-05 19:43:47.601592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:28.739 [2024-12-05 19:43:47.601599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:28.739 [2024-12-05 19:43:47.601606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.739 [2024-12-05 19:43:47.601639] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:28.739 [2024-12-05 19:43:47.601649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.739 [2024-12-05 19:43:47.601656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:28.739 [2024-12-05 19:43:47.601664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:28.739 [2024-12-05 19:43:47.601671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.739 [2024-12-05 19:43:47.625913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.739 [2024-12-05 19:43:47.625980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:28.739 [2024-12-05 19:43:47.626000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.222 ms 00:25:28.739 [2024-12-05 19:43:47.626008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.739 [2024-12-05 19:43:47.626102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.739 [2024-12-05 19:43:47.626113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:28.739 [2024-12-05 19:43:47.626121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:28.739 [2024-12-05 19:43:47.626147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
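The layout figures in the startup dump above are internally consistent: the L2P table needs "L2P entries" x "L2P address size" = 20971520 x 4 bytes = 80 MiB, which matches both the "Region l2p ... blocks: 80.00 MiB" line and the SB metadata entry for region type 0x2 (blk_sz:0x5000). A minimal sketch of that arithmetic, assuming the usual 4 KiB FTL block size (an assumption; the block size itself is not printed in this log):

FTL_BLOCK = 4096                      # assumed FTL block size in bytes

l2p_entries = 20971520                # "L2P entries" from the dump
l2p_addr_size = 4                     # "L2P address size" (bytes per entry)
print(l2p_entries * l2p_addr_size / 2**20)   # -> 80.0 MiB, matches "Region l2p"

# Same region in the SB metadata dump: type 0x2, blk_sz:0x5000 blocks
print(0x5000 * FTL_BLOCK / 2**20)            # -> 80.0 MiB again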
00:25:28.739 [2024-12-05 19:43:47.627618] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 270.444 ms, result 0 00:25:29.673  [2024-12-05T19:43:50.112Z] Copying: 43/1024 [MB] (43 MBps) [2024-12-05T19:43:50.678Z] Copying: 89/1024 [MB] (45 MBps) [2024-12-05T19:43:52.050Z] Copying: 135/1024 [MB] (46 MBps) [2024-12-05T19:43:52.983Z] Copying: 179/1024 [MB] (43 MBps) [2024-12-05T19:43:53.919Z] Copying: 218/1024 [MB] (39 MBps) [2024-12-05T19:43:54.851Z] Copying: 261/1024 [MB] (42 MBps) [2024-12-05T19:43:55.784Z] Copying: 304/1024 [MB] (43 MBps) [2024-12-05T19:43:56.740Z] Copying: 351/1024 [MB] (46 MBps) [2024-12-05T19:43:57.674Z] Copying: 394/1024 [MB] (43 MBps) [2024-12-05T19:43:59.048Z] Copying: 434/1024 [MB] (39 MBps) [2024-12-05T19:43:59.981Z] Copying: 479/1024 [MB] (45 MBps) [2024-12-05T19:44:00.919Z] Copying: 524/1024 [MB] (45 MBps) [2024-12-05T19:44:01.851Z] Copying: 566/1024 [MB] (42 MBps) [2024-12-05T19:44:02.807Z] Copying: 588/1024 [MB] (21 MBps) [2024-12-05T19:44:03.740Z] Copying: 624/1024 [MB] (36 MBps) [2024-12-05T19:44:04.671Z] Copying: 669/1024 [MB] (44 MBps) [2024-12-05T19:44:06.042Z] Copying: 713/1024 [MB] (43 MBps) [2024-12-05T19:44:06.977Z] Copying: 757/1024 [MB] (44 MBps) [2024-12-05T19:44:07.918Z] Copying: 802/1024 [MB] (44 MBps) [2024-12-05T19:44:08.851Z] Copying: 846/1024 [MB] (43 MBps) [2024-12-05T19:44:09.788Z] Copying: 891/1024 [MB] (44 MBps) [2024-12-05T19:44:10.724Z] Copying: 935/1024 [MB] (44 MBps) [2024-12-05T19:44:11.662Z] Copying: 969/1024 [MB] (34 MBps) [2024-12-05T19:44:13.041Z] Copying: 1000/1024 [MB] (30 MBps) [2024-12-05T19:44:13.614Z] Copying: 1023/1024 [MB] (22 MBps) [2024-12-05T19:44:13.614Z] Copying: 1024/1024 [MB] (average 39 MBps)[2024-12-05 19:44:13.473883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.608 [2024-12-05 19:44:13.473950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:54.608 [2024-12-05 19:44:13.473975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:54.608 [2024-12-05 19:44:13.473983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.608 [2024-12-05 19:44:13.475852] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:54.608 [2024-12-05 19:44:13.480973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.608 [2024-12-05 19:44:13.481007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:54.608 [2024-12-05 19:44:13.481020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.082 ms 00:25:54.608 [2024-12-05 19:44:13.481028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.608 [2024-12-05 19:44:13.491582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.608 [2024-12-05 19:44:13.491622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:54.608 [2024-12-05 19:44:13.491633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.168 ms 00:25:54.608 [2024-12-05 19:44:13.491650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.608 [2024-12-05 19:44:13.512492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.608 [2024-12-05 19:44:13.512544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:54.608 [2024-12-05 19:44:13.512558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 20.825 ms 00:25:54.608 [2024-12-05 19:44:13.512566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.608 [2024-12-05 19:44:13.518754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.608 [2024-12-05 19:44:13.518804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:54.608 [2024-12-05 19:44:13.518821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.156 ms 00:25:54.608 [2024-12-05 19:44:13.518847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.608 [2024-12-05 19:44:13.544632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.608 [2024-12-05 19:44:13.544684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:54.608 [2024-12-05 19:44:13.544697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.721 ms 00:25:54.608 [2024-12-05 19:44:13.544706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.608 [2024-12-05 19:44:13.559525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.608 [2024-12-05 19:44:13.559583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:54.608 [2024-12-05 19:44:13.559597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.772 ms 00:25:54.608 [2024-12-05 19:44:13.559604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.608 [2024-12-05 19:44:13.612708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.608 [2024-12-05 19:44:13.612760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:54.608 [2024-12-05 19:44:13.612773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.057 ms 00:25:54.608 [2024-12-05 19:44:13.612781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.870 [2024-12-05 19:44:13.637461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.870 [2024-12-05 19:44:13.637509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:54.870 [2024-12-05 19:44:13.637522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.664 ms 00:25:54.870 [2024-12-05 19:44:13.637529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.870 [2024-12-05 19:44:13.661657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.870 [2024-12-05 19:44:13.661703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:54.870 [2024-12-05 19:44:13.661715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.081 ms 00:25:54.870 [2024-12-05 19:44:13.661723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.870 [2024-12-05 19:44:13.684844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.870 [2024-12-05 19:44:13.684888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:54.870 [2024-12-05 19:44:13.684900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.073 ms 00:25:54.870 [2024-12-05 19:44:13.684908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.870 [2024-12-05 19:44:13.708725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.870 [2024-12-05 19:44:13.708768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:54.870 
[2024-12-05 19:44:13.708779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.743 ms 00:25:54.870 [2024-12-05 19:44:13.708787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.870 [2024-12-05 19:44:13.708836] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:54.870 [2024-12-05 19:44:13.708851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 121856 / 261120 wr_cnt: 1 state: open 00:25:54.870 [2024-12-05 19:44:13.708861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:54.870 [2024-12-05 19:44:13.708869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:54.870 [2024-12-05 19:44:13.708877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:54.870 [2024-12-05 19:44:13.708884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:54.870 [2024-12-05 19:44:13.708892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:54.870 [2024-12-05 19:44:13.708899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:54.870 [2024-12-05 19:44:13.708907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:54.870 [2024-12-05 19:44:13.708914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:54.870 [2024-12-05 19:44:13.708921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:54.870 [2024-12-05 19:44:13.708928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:54.870 [2024-12-05 19:44:13.708935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:54.870 [2024-12-05 19:44:13.708943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:54.870 [2024-12-05 19:44:13.708950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:54.870 [2024-12-05 19:44:13.708958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:54.870 [2024-12-05 19:44:13.708965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:54.870 [2024-12-05 19:44:13.708972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:54.870 [2024-12-05 19:44:13.708979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:54.870 [2024-12-05 19:44:13.708987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:54.870 [2024-12-05 19:44:13.708994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:54.870 [2024-12-05 19:44:13.709001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:54.870 [2024-12-05 19:44:13.709008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:54.870 [2024-12-05 19:44:13.709015] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:54.870 [2024-12-05 19:44:13.709022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:54.870 [2024-12-05 19:44:13.709029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:54.870 [2024-12-05 19:44:13.709036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709213] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 
19:44:13.709406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 
00:25:54.871 [2024-12-05 19:44:13.709593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:54.871 [2024-12-05 19:44:13.709624] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:54.871 [2024-12-05 19:44:13.709631] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7b4ebe3b-0e39-46e8-b133-cc994bddeda9 00:25:54.871 [2024-12-05 19:44:13.709639] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 121856 00:25:54.871 [2024-12-05 19:44:13.709645] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 122816 00:25:54.871 [2024-12-05 19:44:13.709652] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 121856 00:25:54.871 [2024-12-05 19:44:13.709660] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0079 00:25:54.871 [2024-12-05 19:44:13.709680] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:54.871 [2024-12-05 19:44:13.709687] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:54.871 [2024-12-05 19:44:13.709695] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:54.871 [2024-12-05 19:44:13.709701] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:54.871 [2024-12-05 19:44:13.709707] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:54.871 [2024-12-05 19:44:13.709714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.871 [2024-12-05 19:44:13.709722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:54.871 [2024-12-05 19:44:13.709730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.879 ms 00:25:54.871 [2024-12-05 19:44:13.709737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.871 [2024-12-05 19:44:13.721855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.871 [2024-12-05 19:44:13.721895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:54.871 [2024-12-05 19:44:13.721915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.097 ms 00:25:54.871 [2024-12-05 19:44:13.721924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.871 [2024-12-05 19:44:13.722321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:54.871 [2024-12-05 19:44:13.722331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:54.871 [2024-12-05 19:44:13.722340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:25:54.871 [2024-12-05 19:44:13.722347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.871 [2024-12-05 19:44:13.754535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.871 [2024-12-05 19:44:13.754581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:54.871 [2024-12-05 19:44:13.754591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.871 [2024-12-05 19:44:13.754599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.871 [2024-12-05 19:44:13.754664] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.871 [2024-12-05 19:44:13.754672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:54.871 [2024-12-05 19:44:13.754680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.871 [2024-12-05 19:44:13.754687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.871 [2024-12-05 19:44:13.754747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.871 [2024-12-05 19:44:13.754760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:54.871 [2024-12-05 19:44:13.754767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.871 [2024-12-05 19:44:13.754774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.871 [2024-12-05 19:44:13.754789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.871 [2024-12-05 19:44:13.754796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:54.871 [2024-12-05 19:44:13.754804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.871 [2024-12-05 19:44:13.754810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:54.871 [2024-12-05 19:44:13.830447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:54.871 [2024-12-05 19:44:13.830495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:54.871 [2024-12-05 19:44:13.830506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:54.871 [2024-12-05 19:44:13.830514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.132 [2024-12-05 19:44:13.893275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.132 [2024-12-05 19:44:13.893323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:55.132 [2024-12-05 19:44:13.893334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.132 [2024-12-05 19:44:13.893342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.132 [2024-12-05 19:44:13.893405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.132 [2024-12-05 19:44:13.893414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:55.132 [2024-12-05 19:44:13.893421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.132 [2024-12-05 19:44:13.893434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.132 [2024-12-05 19:44:13.893468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.132 [2024-12-05 19:44:13.893476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:55.132 [2024-12-05 19:44:13.893484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.132 [2024-12-05 19:44:13.893491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.132 [2024-12-05 19:44:13.893574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.132 [2024-12-05 19:44:13.893583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:55.132 [2024-12-05 19:44:13.893591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.132 [2024-12-05 19:44:13.893601] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.132 [2024-12-05 19:44:13.893632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.132 [2024-12-05 19:44:13.893641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:55.132 [2024-12-05 19:44:13.893649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.132 [2024-12-05 19:44:13.893656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.132 [2024-12-05 19:44:13.893687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.132 [2024-12-05 19:44:13.893697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:55.132 [2024-12-05 19:44:13.893705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.132 [2024-12-05 19:44:13.893712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.132 [2024-12-05 19:44:13.893751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:55.132 [2024-12-05 19:44:13.893760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:55.132 [2024-12-05 19:44:13.893768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:55.132 [2024-12-05 19:44:13.893774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:55.132 [2024-12-05 19:44:13.893882] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 420.921 ms, result 0 00:25:57.038 00:25:57.038 00:25:57.038 19:44:15 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:25:57.038 [2024-12-05 19:44:15.786904] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
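Two quick cross-checks on the numbers above, as a sketch only (the 4 KiB unit is an assumption carried over from the layout note earlier, not something the log states): the WAF reported in the shutdown statistics dump is simply total writes divided by user writes, and the --skip/--count arguments of the spdk_dd restore invocation are consistent with the 1024 MB total the previous copy pass reported.

total_writes = 122816                 # "total writes" from the ftl_debug.c dump
user_writes  = 121856                 # "user writes"
print(round(total_writes / user_writes, 4))   # -> 1.0079, the reported WAF

BS = 4096                             # assumed I/O unit (4 KiB FTL block)
print(262144 * BS // 2**20)           # --count -> 1024 MiB to read back
print(131072 * BS // 2**20)           # --skip  -> start 512 MiB into ftl0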
00:25:57.038 [2024-12-05 19:44:15.787026] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78296 ] 00:25:57.038 [2024-12-05 19:44:15.946941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.296 [2024-12-05 19:44:16.046042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.554 [2024-12-05 19:44:16.303822] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:57.554 [2024-12-05 19:44:16.303885] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:57.554 [2024-12-05 19:44:16.456901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.554 [2024-12-05 19:44:16.456957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:57.554 [2024-12-05 19:44:16.456970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:57.554 [2024-12-05 19:44:16.456978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.554 [2024-12-05 19:44:16.457028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.554 [2024-12-05 19:44:16.457040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:57.554 [2024-12-05 19:44:16.457049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:25:57.554 [2024-12-05 19:44:16.457056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.554 [2024-12-05 19:44:16.457075] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:57.554 [2024-12-05 19:44:16.457856] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:57.555 [2024-12-05 19:44:16.457884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.555 [2024-12-05 19:44:16.457892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:57.555 [2024-12-05 19:44:16.457901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.814 ms 00:25:57.555 [2024-12-05 19:44:16.457908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.555 [2024-12-05 19:44:16.458975] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:57.555 [2024-12-05 19:44:16.471151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.555 [2024-12-05 19:44:16.471204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:57.555 [2024-12-05 19:44:16.471216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.176 ms 00:25:57.555 [2024-12-05 19:44:16.471225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.555 [2024-12-05 19:44:16.471298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.555 [2024-12-05 19:44:16.471307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:57.555 [2024-12-05 19:44:16.471316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:57.555 [2024-12-05 19:44:16.471323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.555 [2024-12-05 19:44:16.476491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:57.555 [2024-12-05 19:44:16.476529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:57.555 [2024-12-05 19:44:16.476539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.094 ms 00:25:57.555 [2024-12-05 19:44:16.476552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.555 [2024-12-05 19:44:16.476630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.555 [2024-12-05 19:44:16.476638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:57.555 [2024-12-05 19:44:16.476646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:57.555 [2024-12-05 19:44:16.476654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.555 [2024-12-05 19:44:16.476704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.555 [2024-12-05 19:44:16.476714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:57.555 [2024-12-05 19:44:16.476722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:57.555 [2024-12-05 19:44:16.476729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.555 [2024-12-05 19:44:16.476755] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:57.555 [2024-12-05 19:44:16.480157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.555 [2024-12-05 19:44:16.480183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:57.555 [2024-12-05 19:44:16.480195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.409 ms 00:25:57.555 [2024-12-05 19:44:16.480202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.555 [2024-12-05 19:44:16.480240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.555 [2024-12-05 19:44:16.480249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:57.555 [2024-12-05 19:44:16.480257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:57.555 [2024-12-05 19:44:16.480264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.555 [2024-12-05 19:44:16.480284] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:57.555 [2024-12-05 19:44:16.480302] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:57.555 [2024-12-05 19:44:16.480337] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:57.555 [2024-12-05 19:44:16.480354] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:57.555 [2024-12-05 19:44:16.480457] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:57.555 [2024-12-05 19:44:16.480467] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:57.555 [2024-12-05 19:44:16.480477] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:57.555 [2024-12-05 19:44:16.480487] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:57.555 [2024-12-05 19:44:16.480496] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:57.555 [2024-12-05 19:44:16.480503] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:57.555 [2024-12-05 19:44:16.480511] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:57.555 [2024-12-05 19:44:16.480520] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:57.555 [2024-12-05 19:44:16.480528] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:57.555 [2024-12-05 19:44:16.480535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.555 [2024-12-05 19:44:16.480542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:57.555 [2024-12-05 19:44:16.480549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:25:57.555 [2024-12-05 19:44:16.480557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.555 [2024-12-05 19:44:16.480639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.555 [2024-12-05 19:44:16.480646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:57.555 [2024-12-05 19:44:16.480654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:57.555 [2024-12-05 19:44:16.480660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.555 [2024-12-05 19:44:16.480779] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:57.555 [2024-12-05 19:44:16.480790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:57.555 [2024-12-05 19:44:16.480798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:57.555 [2024-12-05 19:44:16.480805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.555 [2024-12-05 19:44:16.480813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:57.555 [2024-12-05 19:44:16.480819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:57.555 [2024-12-05 19:44:16.480826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:57.555 [2024-12-05 19:44:16.480834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:57.555 [2024-12-05 19:44:16.480841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:57.555 [2024-12-05 19:44:16.480847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:57.555 [2024-12-05 19:44:16.480854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:57.555 [2024-12-05 19:44:16.480860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:57.555 [2024-12-05 19:44:16.480866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:57.555 [2024-12-05 19:44:16.480879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:57.555 [2024-12-05 19:44:16.480885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:57.555 [2024-12-05 19:44:16.480892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.555 [2024-12-05 19:44:16.480900] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:57.555 [2024-12-05 19:44:16.480906] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:57.555 [2024-12-05 19:44:16.480913] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.555 [2024-12-05 19:44:16.480919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:57.555 [2024-12-05 19:44:16.480927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:57.555 [2024-12-05 19:44:16.480934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.555 [2024-12-05 19:44:16.480940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:57.555 [2024-12-05 19:44:16.480947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:57.555 [2024-12-05 19:44:16.480953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.555 [2024-12-05 19:44:16.480960] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:57.555 [2024-12-05 19:44:16.480967] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:57.555 [2024-12-05 19:44:16.480973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.555 [2024-12-05 19:44:16.480979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:57.555 [2024-12-05 19:44:16.480986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:57.555 [2024-12-05 19:44:16.480992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:57.555 [2024-12-05 19:44:16.480999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:57.555 [2024-12-05 19:44:16.481006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:57.555 [2024-12-05 19:44:16.481012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:57.555 [2024-12-05 19:44:16.481018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:57.555 [2024-12-05 19:44:16.481025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:57.555 [2024-12-05 19:44:16.481031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:57.555 [2024-12-05 19:44:16.481037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:57.555 [2024-12-05 19:44:16.481044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:57.555 [2024-12-05 19:44:16.481050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.555 [2024-12-05 19:44:16.481056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:57.555 [2024-12-05 19:44:16.481063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:57.555 [2024-12-05 19:44:16.481069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.555 [2024-12-05 19:44:16.481075] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:57.555 [2024-12-05 19:44:16.481083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:57.555 [2024-12-05 19:44:16.481089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:57.555 [2024-12-05 19:44:16.481096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:57.555 [2024-12-05 19:44:16.481103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:57.555 [2024-12-05 19:44:16.481111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:57.555 [2024-12-05 19:44:16.481117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:57.555 
[2024-12-05 19:44:16.481124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:57.555 [2024-12-05 19:44:16.481142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:57.555 [2024-12-05 19:44:16.481149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:57.555 [2024-12-05 19:44:16.481156] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:57.555 [2024-12-05 19:44:16.481165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:57.555 [2024-12-05 19:44:16.481176] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:57.555 [2024-12-05 19:44:16.481184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:57.555 [2024-12-05 19:44:16.481191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:57.555 [2024-12-05 19:44:16.481198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:57.555 [2024-12-05 19:44:16.481205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:57.555 [2024-12-05 19:44:16.481212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:57.555 [2024-12-05 19:44:16.481219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:57.555 [2024-12-05 19:44:16.481226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:57.555 [2024-12-05 19:44:16.481233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:57.555 [2024-12-05 19:44:16.481240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:57.555 [2024-12-05 19:44:16.481247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:57.555 [2024-12-05 19:44:16.481254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:57.555 [2024-12-05 19:44:16.481261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:57.555 [2024-12-05 19:44:16.481269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:57.555 [2024-12-05 19:44:16.481276] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:57.555 [2024-12-05 19:44:16.481284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:57.555 [2024-12-05 19:44:16.481292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:57.555 [2024-12-05 19:44:16.481300] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:57.555 [2024-12-05 19:44:16.481307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:57.555 [2024-12-05 19:44:16.481314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:57.555 [2024-12-05 19:44:16.481322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.555 [2024-12-05 19:44:16.481328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:57.555 [2024-12-05 19:44:16.481336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.611 ms 00:25:57.555 [2024-12-05 19:44:16.481343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.555 [2024-12-05 19:44:16.506900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.555 [2024-12-05 19:44:16.506939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:57.555 [2024-12-05 19:44:16.506950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.503 ms 00:25:57.555 [2024-12-05 19:44:16.506961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.555 [2024-12-05 19:44:16.507053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.555 [2024-12-05 19:44:16.507061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:57.555 [2024-12-05 19:44:16.507070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:25:57.555 [2024-12-05 19:44:16.507077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.555 [2024-12-05 19:44:16.554196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.555 [2024-12-05 19:44:16.554244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:57.555 [2024-12-05 19:44:16.554257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.039 ms 00:25:57.555 [2024-12-05 19:44:16.554266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.555 [2024-12-05 19:44:16.554324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.555 [2024-12-05 19:44:16.554334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:57.555 [2024-12-05 19:44:16.554346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:57.555 [2024-12-05 19:44:16.554353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.555 [2024-12-05 19:44:16.554740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.555 [2024-12-05 19:44:16.554764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:57.555 [2024-12-05 19:44:16.554774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:25:57.555 [2024-12-05 19:44:16.554782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.555 [2024-12-05 19:44:16.554914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.555 [2024-12-05 19:44:16.554923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:57.555 [2024-12-05 19:44:16.554936] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:25:57.555 [2024-12-05 19:44:16.554943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.813 [2024-12-05 19:44:16.567883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.813 [2024-12-05 19:44:16.567920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:57.813 [2024-12-05 19:44:16.567932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.921 ms 00:25:57.813 [2024-12-05 19:44:16.567940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.813 [2024-12-05 19:44:16.580259] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:57.813 [2024-12-05 19:44:16.580304] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:57.813 [2024-12-05 19:44:16.580316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.813 [2024-12-05 19:44:16.580324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:57.813 [2024-12-05 19:44:16.580334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.266 ms 00:25:57.813 [2024-12-05 19:44:16.580342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.813 [2024-12-05 19:44:16.604971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.813 [2024-12-05 19:44:16.605026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:57.813 [2024-12-05 19:44:16.605039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.566 ms 00:25:57.813 [2024-12-05 19:44:16.605047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.813 [2024-12-05 19:44:16.617471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.813 [2024-12-05 19:44:16.617514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:57.813 [2024-12-05 19:44:16.617525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.350 ms 00:25:57.813 [2024-12-05 19:44:16.617533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.813 [2024-12-05 19:44:16.628843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.813 [2024-12-05 19:44:16.628882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:57.813 [2024-12-05 19:44:16.628894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.267 ms 00:25:57.813 [2024-12-05 19:44:16.628901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.813 [2024-12-05 19:44:16.629547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.813 [2024-12-05 19:44:16.629566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:57.814 [2024-12-05 19:44:16.629578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.537 ms 00:25:57.814 [2024-12-05 19:44:16.629585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.814 [2024-12-05 19:44:16.684524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.814 [2024-12-05 19:44:16.684580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:57.814 [2024-12-05 19:44:16.684601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.921 ms 00:25:57.814 [2024-12-05 19:44:16.684610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.814 [2024-12-05 19:44:16.695374] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:57.814 [2024-12-05 19:44:16.698123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.814 [2024-12-05 19:44:16.698169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:57.814 [2024-12-05 19:44:16.698182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.457 ms 00:25:57.814 [2024-12-05 19:44:16.698191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.814 [2024-12-05 19:44:16.698301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.814 [2024-12-05 19:44:16.698313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:57.814 [2024-12-05 19:44:16.698324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:57.814 [2024-12-05 19:44:16.698331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.814 [2024-12-05 19:44:16.699722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.814 [2024-12-05 19:44:16.699756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:57.814 [2024-12-05 19:44:16.699767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.352 ms 00:25:57.814 [2024-12-05 19:44:16.699774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.814 [2024-12-05 19:44:16.699802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.814 [2024-12-05 19:44:16.699811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:57.814 [2024-12-05 19:44:16.699819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:57.814 [2024-12-05 19:44:16.699827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.814 [2024-12-05 19:44:16.699864] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:57.814 [2024-12-05 19:44:16.699873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.814 [2024-12-05 19:44:16.699881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:57.814 [2024-12-05 19:44:16.699888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:57.814 [2024-12-05 19:44:16.699895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.814 [2024-12-05 19:44:16.724359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.814 [2024-12-05 19:44:16.724417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:57.814 [2024-12-05 19:44:16.724438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.444 ms 00:25:57.814 [2024-12-05 19:44:16.724448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:57.814 [2024-12-05 19:44:16.724547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:57.814 [2024-12-05 19:44:16.724558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:57.814 [2024-12-05 19:44:16.724567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:57.814 [2024-12-05 19:44:16.724575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:57.814 [2024-12-05 19:44:16.725540] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 268.207 ms, result 0 00:25:59.188  [2024-12-05T19:44:19.127Z] Copying: 36/1024 [MB] (36 MBps) [2024-12-05T19:44:20.069Z] Copying: 65/1024 [MB] (28 MBps) [2024-12-05T19:44:21.020Z] Copying: 80/1024 [MB] (15 MBps) [2024-12-05T19:44:22.031Z] Copying: 107/1024 [MB] (26 MBps) [2024-12-05T19:44:22.974Z] Copying: 122/1024 [MB] (15 MBps) [2024-12-05T19:44:23.915Z] Copying: 135/1024 [MB] (12 MBps) [2024-12-05T19:44:25.302Z] Copying: 153/1024 [MB] (17 MBps) [2024-12-05T19:44:26.245Z] Copying: 170/1024 [MB] (17 MBps) [2024-12-05T19:44:27.215Z] Copying: 187/1024 [MB] (16 MBps) [2024-12-05T19:44:28.179Z] Copying: 200/1024 [MB] (13 MBps) [2024-12-05T19:44:29.123Z] Copying: 212/1024 [MB] (12 MBps) [2024-12-05T19:44:30.067Z] Copying: 227484/1048576 [kB] (9796 kBps) [2024-12-05T19:44:31.012Z] Copying: 237672/1048576 [kB] (10188 kBps) [2024-12-05T19:44:31.957Z] Copying: 242/1024 [MB] (10 MBps) [2024-12-05T19:44:33.343Z] Copying: 252/1024 [MB] (10 MBps) [2024-12-05T19:44:33.914Z] Copying: 268864/1048576 [kB] (10128 kBps) [2024-12-05T19:44:35.299Z] Copying: 272/1024 [MB] (10 MBps) [2024-12-05T19:44:36.232Z] Copying: 283/1024 [MB] (10 MBps) [2024-12-05T19:44:37.160Z] Copying: 293/1024 [MB] (10 MBps) [2024-12-05T19:44:38.094Z] Copying: 304/1024 [MB] (11 MBps) [2024-12-05T19:44:39.085Z] Copying: 315/1024 [MB] (10 MBps) [2024-12-05T19:44:40.026Z] Copying: 325/1024 [MB] (10 MBps) [2024-12-05T19:44:40.959Z] Copying: 336/1024 [MB] (10 MBps) [2024-12-05T19:44:42.336Z] Copying: 346/1024 [MB] (10 MBps) [2024-12-05T19:44:43.274Z] Copying: 358/1024 [MB] (11 MBps) [2024-12-05T19:44:44.213Z] Copying: 368/1024 [MB] (10 MBps) [2024-12-05T19:44:45.190Z] Copying: 379/1024 [MB] (10 MBps) [2024-12-05T19:44:46.129Z] Copying: 390/1024 [MB] (10 MBps) [2024-12-05T19:44:47.068Z] Copying: 400/1024 [MB] (10 MBps) [2024-12-05T19:44:48.007Z] Copying: 411/1024 [MB] (10 MBps) [2024-12-05T19:44:48.946Z] Copying: 421/1024 [MB] (10 MBps) [2024-12-05T19:44:50.324Z] Copying: 432/1024 [MB] (11 MBps) [2024-12-05T19:44:51.261Z] Copying: 443/1024 [MB] (10 MBps) [2024-12-05T19:44:52.198Z] Copying: 455/1024 [MB] (12 MBps) [2024-12-05T19:44:53.129Z] Copying: 476388/1048576 [kB] (9948 kBps) [2024-12-05T19:44:54.061Z] Copying: 475/1024 [MB] (10 MBps) [2024-12-05T19:44:54.995Z] Copying: 486/1024 [MB] (10 MBps) [2024-12-05T19:44:55.940Z] Copying: 499/1024 [MB] (13 MBps) [2024-12-05T19:44:57.317Z] Copying: 509/1024 [MB] (10 MBps) [2024-12-05T19:44:58.251Z] Copying: 520/1024 [MB] (10 MBps) [2024-12-05T19:44:59.186Z] Copying: 531/1024 [MB] (11 MBps) [2024-12-05T19:45:00.120Z] Copying: 542/1024 [MB] (10 MBps) [2024-12-05T19:45:01.052Z] Copying: 553/1024 [MB] (11 MBps) [2024-12-05T19:45:02.039Z] Copying: 564/1024 [MB] (10 MBps) [2024-12-05T19:45:02.973Z] Copying: 579/1024 [MB] (15 MBps) [2024-12-05T19:45:04.346Z] Copying: 590/1024 [MB] (10 MBps) [2024-12-05T19:45:05.277Z] Copying: 600/1024 [MB] (10 MBps) [2024-12-05T19:45:06.208Z] Copying: 610/1024 [MB] (10 MBps) [2024-12-05T19:45:07.137Z] Copying: 620/1024 [MB] (10 MBps) [2024-12-05T19:45:08.125Z] Copying: 645924/1048576 [kB] (10192 kBps) [2024-12-05T19:45:09.058Z] Copying: 655736/1048576 [kB] (9812 kBps) [2024-12-05T19:45:10.006Z] Copying: 665536/1048576 [kB] (9800 kBps) [2024-12-05T19:45:10.941Z] Copying: 660/1024 [MB] (10 MBps) [2024-12-05T19:45:12.315Z] Copying: 671/1024 [MB] (10 MBps) [2024-12-05T19:45:13.248Z] Copying: 681/1024 [MB] (10 MBps) 
[2024-12-05T19:45:14.186Z] Copying: 707584/1048576 [kB] (9584 kBps) [2024-12-05T19:45:15.121Z] Copying: 717464/1048576 [kB] (9880 kBps) [2024-12-05T19:45:16.052Z] Copying: 727448/1048576 [kB] (9984 kBps) [2024-12-05T19:45:16.986Z] Copying: 721/1024 [MB] (10 MBps) [2024-12-05T19:45:17.927Z] Copying: 748504/1048576 [kB] (10124 kBps) [2024-12-05T19:45:19.302Z] Copying: 746/1024 [MB] (15 MBps) [2024-12-05T19:45:20.237Z] Copying: 757/1024 [MB] (11 MBps) [2024-12-05T19:45:21.220Z] Copying: 767/1024 [MB] (10 MBps) [2024-12-05T19:45:22.153Z] Copying: 778/1024 [MB] (10 MBps) [2024-12-05T19:45:23.090Z] Copying: 789/1024 [MB] (10 MBps) [2024-12-05T19:45:24.023Z] Copying: 800/1024 [MB] (11 MBps) [2024-12-05T19:45:24.998Z] Copying: 810/1024 [MB] (10 MBps) [2024-12-05T19:45:25.936Z] Copying: 821/1024 [MB] (10 MBps) [2024-12-05T19:45:27.314Z] Copying: 832/1024 [MB] (10 MBps) [2024-12-05T19:45:28.251Z] Copying: 842/1024 [MB] (10 MBps) [2024-12-05T19:45:29.213Z] Copying: 872576/1048576 [kB] (10196 kBps) [2024-12-05T19:45:30.147Z] Copying: 882720/1048576 [kB] (10144 kBps) [2024-12-05T19:45:31.080Z] Copying: 872/1024 [MB] (10 MBps) [2024-12-05T19:45:32.013Z] Copying: 883/1024 [MB] (10 MBps) [2024-12-05T19:45:32.952Z] Copying: 896/1024 [MB] (13 MBps) [2024-12-05T19:45:34.335Z] Copying: 907/1024 [MB] (10 MBps) [2024-12-05T19:45:35.272Z] Copying: 918/1024 [MB] (11 MBps) [2024-12-05T19:45:36.203Z] Copying: 928/1024 [MB] (10 MBps) [2024-12-05T19:45:37.137Z] Copying: 938/1024 [MB] (10 MBps) [2024-12-05T19:45:38.070Z] Copying: 950/1024 [MB] (11 MBps) [2024-12-05T19:45:39.003Z] Copying: 982920/1048576 [kB] (9784 kBps) [2024-12-05T19:45:39.936Z] Copying: 992712/1048576 [kB] (9792 kBps) [2024-12-05T19:45:41.310Z] Copying: 1002680/1048576 [kB] (9968 kBps) [2024-12-05T19:45:42.241Z] Copying: 989/1024 [MB] (10 MBps) [2024-12-05T19:45:43.175Z] Copying: 1002/1024 [MB] (12 MBps) [2024-12-05T19:45:44.107Z] Copying: 1013/1024 [MB] (10 MBps) [2024-12-05T19:45:44.107Z] Copying: 1024/1024 [MB] (average 11 MBps)[2024-12-05 19:45:43.948234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.101 [2024-12-05 19:45:43.948300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:25.101 [2024-12-05 19:45:43.948324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:25.101 [2024-12-05 19:45:43.948334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.101 [2024-12-05 19:45:43.948357] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:25.101 [2024-12-05 19:45:43.951635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.101 [2024-12-05 19:45:43.951672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:25.101 [2024-12-05 19:45:43.951682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.261 ms 00:27:25.101 [2024-12-05 19:45:43.951690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.101 [2024-12-05 19:45:43.951932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.101 [2024-12-05 19:45:43.951948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:25.101 [2024-12-05 19:45:43.951958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:27:25.101 [2024-12-05 19:45:43.951971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.101 [2024-12-05 19:45:43.956800] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.101 [2024-12-05 19:45:43.956837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:25.101 [2024-12-05 19:45:43.956847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.812 ms 00:27:25.101 [2024-12-05 19:45:43.956856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.101 [2024-12-05 19:45:43.963270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.101 [2024-12-05 19:45:43.963303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:25.101 [2024-12-05 19:45:43.963313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.379 ms 00:27:25.101 [2024-12-05 19:45:43.963327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.101 [2024-12-05 19:45:43.987761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.101 [2024-12-05 19:45:43.987800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:25.101 [2024-12-05 19:45:43.987811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.371 ms 00:27:25.101 [2024-12-05 19:45:43.987819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.101 [2024-12-05 19:45:44.002091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.101 [2024-12-05 19:45:44.002139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:25.101 [2024-12-05 19:45:44.002152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.238 ms 00:27:25.101 [2024-12-05 19:45:44.002159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.393 [2024-12-05 19:45:44.267767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.393 [2024-12-05 19:45:44.267829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:25.393 [2024-12-05 19:45:44.267844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 265.566 ms 00:27:25.393 [2024-12-05 19:45:44.267854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.393 [2024-12-05 19:45:44.292464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.393 [2024-12-05 19:45:44.292509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:25.393 [2024-12-05 19:45:44.292521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.592 ms 00:27:25.393 [2024-12-05 19:45:44.292528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.393 [2024-12-05 19:45:44.315876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.393 [2024-12-05 19:45:44.315914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:25.393 [2024-12-05 19:45:44.315926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.311 ms 00:27:25.393 [2024-12-05 19:45:44.315934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.393 [2024-12-05 19:45:44.339027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.393 [2024-12-05 19:45:44.339069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:25.393 [2024-12-05 19:45:44.339080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.060 ms 00:27:25.393 [2024-12-05 19:45:44.339087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0
00:27:25.393 [2024-12-05 19:45:44.361650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:25.393 [2024-12-05 19:45:44.361684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:27:25.393 [2024-12-05 19:45:44.361694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.493 ms
00:27:25.393 [2024-12-05 19:45:44.361701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:25.393 [2024-12-05 19:45:44.361733] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:27:25.393 [2024-12-05 19:45:44.361746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open
00:27:25.393 [2024-12-05 19:45:44.361756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free
00:27:25.394 [2024-12-05 19:45:44.362509] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:27:25.394 [2024-12-05 19:45:44.362516] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7b4ebe3b-0e39-46e8-b133-cc994bddeda9
00:27:25.394 [2024-12-05 19:45:44.362524] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072
00:27:25.394 [2024-12-05 19:45:44.362531] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 10176
00:27:25.394 [2024-12-05 19:45:44.362539] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 9216
00:27:25.394 [2024-12-05 19:45:44.362547] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.1042
00:27:25.394 [2024-12-05 19:45:44.362558] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:27:25.394 [2024-12-05 19:45:44.362572] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:27:25.394 [2024-12-05 19:45:44.362579] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:27:25.394 [2024-12-05 19:45:44.362586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:27:25.394 [2024-12-05 19:45:44.362592] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:27:25.394 [2024-12-05 19:45:44.362599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:25.394 [2024-12-05 19:45:44.362606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:27:25.394 [2024-12-05 19:45:44.362614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.867 ms
00:27:25.394 [2024-12-05 19:45:44.362622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:25.394 [2024-12-05 19:45:44.375075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:25.394 [2024-12-05 19:45:44.375108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:27:25.394 [2024-12-05 19:45:44.375123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.438 ms
00:27:25.394 [2024-12-05 19:45:44.375149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:25.394 [2024-12-05 19:45:44.375499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:25.394 [2024-12-05 19:45:44.375513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:27:25.394 [2024-12-05 19:45:44.375521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms
00:27:25.394 [2024-12-05 19:45:44.375528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:25.655 [2024-12-05 19:45:44.408193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:27:25.655 [2024-12-05 19:45:44.408245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:27:25.655 [2024-12-05
19:45:44.408256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.655 [2024-12-05 19:45:44.408265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.655 [2024-12-05 19:45:44.408326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.655 [2024-12-05 19:45:44.408333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:25.655 [2024-12-05 19:45:44.408341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.655 [2024-12-05 19:45:44.408349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.655 [2024-12-05 19:45:44.408408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.655 [2024-12-05 19:45:44.408417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:25.655 [2024-12-05 19:45:44.408429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.655 [2024-12-05 19:45:44.408436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.655 [2024-12-05 19:45:44.408451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.655 [2024-12-05 19:45:44.408459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:25.655 [2024-12-05 19:45:44.408466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.655 [2024-12-05 19:45:44.408473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.655 [2024-12-05 19:45:44.486923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.655 [2024-12-05 19:45:44.486984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:25.655 [2024-12-05 19:45:44.486995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.655 [2024-12-05 19:45:44.487003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.655 [2024-12-05 19:45:44.550902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.655 [2024-12-05 19:45:44.550956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:25.655 [2024-12-05 19:45:44.550969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.655 [2024-12-05 19:45:44.550978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.655 [2024-12-05 19:45:44.551046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.655 [2024-12-05 19:45:44.551055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:25.655 [2024-12-05 19:45:44.551063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.655 [2024-12-05 19:45:44.551077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.655 [2024-12-05 19:45:44.551112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.655 [2024-12-05 19:45:44.551121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:25.655 [2024-12-05 19:45:44.551144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.655 [2024-12-05 19:45:44.551152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.655 [2024-12-05 19:45:44.551237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.655 [2024-12-05 19:45:44.551247] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:25.655 [2024-12-05 19:45:44.551255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.655 [2024-12-05 19:45:44.551262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.655 [2024-12-05 19:45:44.551292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.655 [2024-12-05 19:45:44.551300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:25.655 [2024-12-05 19:45:44.551307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.655 [2024-12-05 19:45:44.551315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.655 [2024-12-05 19:45:44.551347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.655 [2024-12-05 19:45:44.551356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:25.655 [2024-12-05 19:45:44.551364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.655 [2024-12-05 19:45:44.551370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.655 [2024-12-05 19:45:44.551412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.655 [2024-12-05 19:45:44.551423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:25.655 [2024-12-05 19:45:44.551430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.655 [2024-12-05 19:45:44.551438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.655 [2024-12-05 19:45:44.551549] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 603.289 ms, result 0 00:27:26.590 00:27:26.590 00:27:26.590 19:45:45 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:28.492 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:28.492 19:45:47 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:27:28.492 19:45:47 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:27:28.492 19:45:47 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:28.753 19:45:47 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:28.753 19:45:47 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:28.753 19:45:47 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 77240 00:27:28.753 19:45:47 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 77240 ']' 00:27:28.753 Process with pid 77240 is not found 00:27:28.753 Remove shared memory files 00:27:28.753 19:45:47 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 77240 00:27:28.753 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77240) - No such process 00:27:28.753 19:45:47 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 77240 is not found' 00:27:28.753 19:45:47 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:27:28.753 19:45:47 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:28.753 19:45:47 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:27:28.753 19:45:47 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:27:28.753 19:45:47 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:27:28.753 
19:45:47 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:28.753 19:45:47 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:27:28.753 ************************************ 00:27:28.753 END TEST ftl_restore 00:27:28.753 ************************************ 00:27:28.753 00:27:28.753 real 3m13.206s 00:27:28.753 user 3m3.171s 00:27:28.753 sys 0m10.816s 00:27:28.753 19:45:47 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:28.753 19:45:47 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:27:28.753 19:45:47 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:27:28.753 19:45:47 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:28.753 19:45:47 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:28.753 19:45:47 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:28.753 ************************************ 00:27:28.753 START TEST ftl_dirty_shutdown 00:27:28.753 ************************************ 00:27:28.753 19:45:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:27:28.753 * Looking for test storage... 00:27:28.753 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:28.753 19:45:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:28.753 19:45:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:27:28.753 19:45:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:27:29.016 19:45:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:29.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.017 --rc genhtml_branch_coverage=1 00:27:29.017 --rc genhtml_function_coverage=1 00:27:29.017 --rc genhtml_legend=1 00:27:29.017 --rc geninfo_all_blocks=1 00:27:29.017 --rc geninfo_unexecuted_blocks=1 00:27:29.017 00:27:29.017 ' 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:29.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.017 --rc genhtml_branch_coverage=1 00:27:29.017 --rc genhtml_function_coverage=1 00:27:29.017 --rc genhtml_legend=1 00:27:29.017 --rc geninfo_all_blocks=1 00:27:29.017 --rc geninfo_unexecuted_blocks=1 00:27:29.017 00:27:29.017 ' 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:29.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.017 --rc genhtml_branch_coverage=1 00:27:29.017 --rc genhtml_function_coverage=1 00:27:29.017 --rc genhtml_legend=1 00:27:29.017 --rc geninfo_all_blocks=1 00:27:29.017 --rc geninfo_unexecuted_blocks=1 00:27:29.017 00:27:29.017 ' 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:29.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.017 --rc genhtml_branch_coverage=1 00:27:29.017 --rc genhtml_function_coverage=1 00:27:29.017 --rc genhtml_legend=1 00:27:29.017 --rc geninfo_all_blocks=1 00:27:29.017 --rc geninfo_unexecuted_blocks=1 00:27:29.017 00:27:29.017 ' 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:27:29.017 19:45:47 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=79304 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 79304 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 79304 ']' 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:29.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:29.017 19:45:47 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:29.017 [2024-12-05 19:45:47.864201] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
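The xtrace above shows dirty_shutdown.sh consuming its arguments: the -c 0000:00:10.0 option selects the NV cache PCI address, and the remaining positional argument becomes the base device (0000:00:11.0). A minimal bash sketch of that option handling, reconstructed from the getopts :u:c: and shift 2 calls visible in the trace; the variable names match the trace, but the loop structure and the meaning of -u are assumptions:

    #!/usr/bin/env bash
    # Sketch of the argument parsing traced above (dirty_shutdown.sh@14-28).
    while getopts ':u:c:' opt; do
      case $opt in
        c) nv_cache=$OPTARG ;;   # -c 0000:00:10.0 -> NV cache BDF, as seen in the trace
        u) uuid=$OPTARG ;;       # assumed: -u selects an existing FTL UUID
        *) echo "usage: $0 [-u uuid] [-c nv_cache_bdf] base_bdf" >&2; exit 1 ;;
      esac
    done
    shift $((OPTIND - 1))        # equivalent to the literal 'shift 2' in the trace
    device=$1                    # 0000:00:11.0 in this run
    timeout=240; block_size=4096; chunk_size=262144; data_size=262144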
00:27:29.017 [2024-12-05 19:45:47.864475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79304 ] 00:27:29.276 [2024-12-05 19:45:48.028195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.276 [2024-12-05 19:45:48.130576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.843 19:45:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:29.843 19:45:48 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:29.843 19:45:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:29.843 19:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:27:29.843 19:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:29.843 19:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:27:29.843 19:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:29.843 19:45:48 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:30.103 19:45:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:30.103 19:45:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:30.103 19:45:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:30.103 19:45:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:30.103 19:45:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:30.103 19:45:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:30.103 19:45:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:30.103 19:45:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:30.367 19:45:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:30.367 { 00:27:30.367 "name": "nvme0n1", 00:27:30.367 "aliases": [ 00:27:30.367 "bf30383b-c702-4a47-9a79-66f372950c5d" 00:27:30.367 ], 00:27:30.367 "product_name": "NVMe disk", 00:27:30.367 "block_size": 4096, 00:27:30.367 "num_blocks": 1310720, 00:27:30.367 "uuid": "bf30383b-c702-4a47-9a79-66f372950c5d", 00:27:30.367 "numa_id": -1, 00:27:30.367 "assigned_rate_limits": { 00:27:30.367 "rw_ios_per_sec": 0, 00:27:30.367 "rw_mbytes_per_sec": 0, 00:27:30.367 "r_mbytes_per_sec": 0, 00:27:30.367 "w_mbytes_per_sec": 0 00:27:30.367 }, 00:27:30.367 "claimed": true, 00:27:30.367 "claim_type": "read_many_write_one", 00:27:30.367 "zoned": false, 00:27:30.367 "supported_io_types": { 00:27:30.367 "read": true, 00:27:30.367 "write": true, 00:27:30.367 "unmap": true, 00:27:30.367 "flush": true, 00:27:30.367 "reset": true, 00:27:30.367 "nvme_admin": true, 00:27:30.367 "nvme_io": true, 00:27:30.367 "nvme_io_md": false, 00:27:30.367 "write_zeroes": true, 00:27:30.367 "zcopy": false, 00:27:30.367 "get_zone_info": false, 00:27:30.367 "zone_management": false, 00:27:30.367 "zone_append": false, 00:27:30.367 "compare": true, 00:27:30.367 "compare_and_write": false, 00:27:30.367 "abort": true, 00:27:30.367 "seek_hole": false, 00:27:30.367 "seek_data": false, 00:27:30.367 
"copy": true, 00:27:30.367 "nvme_iov_md": false 00:27:30.367 }, 00:27:30.367 "driver_specific": { 00:27:30.367 "nvme": [ 00:27:30.367 { 00:27:30.367 "pci_address": "0000:00:11.0", 00:27:30.367 "trid": { 00:27:30.367 "trtype": "PCIe", 00:27:30.367 "traddr": "0000:00:11.0" 00:27:30.367 }, 00:27:30.367 "ctrlr_data": { 00:27:30.367 "cntlid": 0, 00:27:30.367 "vendor_id": "0x1b36", 00:27:30.367 "model_number": "QEMU NVMe Ctrl", 00:27:30.367 "serial_number": "12341", 00:27:30.367 "firmware_revision": "8.0.0", 00:27:30.367 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:30.367 "oacs": { 00:27:30.367 "security": 0, 00:27:30.367 "format": 1, 00:27:30.367 "firmware": 0, 00:27:30.367 "ns_manage": 1 00:27:30.367 }, 00:27:30.367 "multi_ctrlr": false, 00:27:30.367 "ana_reporting": false 00:27:30.367 }, 00:27:30.367 "vs": { 00:27:30.367 "nvme_version": "1.4" 00:27:30.367 }, 00:27:30.367 "ns_data": { 00:27:30.367 "id": 1, 00:27:30.367 "can_share": false 00:27:30.367 } 00:27:30.367 } 00:27:30.367 ], 00:27:30.367 "mp_policy": "active_passive" 00:27:30.367 } 00:27:30.367 } 00:27:30.367 ]' 00:27:30.367 19:45:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:30.367 19:45:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:30.367 19:45:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:30.367 19:45:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:30.367 19:45:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:30.367 19:45:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:27:30.367 19:45:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:30.367 19:45:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:30.367 19:45:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:30.367 19:45:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:30.367 19:45:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:30.629 19:45:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=d444492f-d811-45e6-8d3b-be0a9bf513b0 00:27:30.629 19:45:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:30.629 19:45:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d444492f-d811-45e6-8d3b-be0a9bf513b0 00:27:30.892 19:45:49 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:31.153 19:45:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=55f13410-d005-4d58-9a05-cd896861b559 00:27:31.153 19:45:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 55f13410-d005-4d58-9a05-cd896861b559 00:27:31.415 19:45:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=c3f147d8-c38e-4cbd-8ed2-eadff17ce03c 00:27:31.415 19:45:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:27:31.415 19:45:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 c3f147d8-c38e-4cbd-8ed2-eadff17ce03c 00:27:31.415 19:45:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:27:31.415 19:45:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:27:31.415 19:45:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=c3f147d8-c38e-4cbd-8ed2-eadff17ce03c 00:27:31.415 19:45:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:27:31.415 19:45:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size c3f147d8-c38e-4cbd-8ed2-eadff17ce03c 00:27:31.415 19:45:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=c3f147d8-c38e-4cbd-8ed2-eadff17ce03c 00:27:31.415 19:45:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:31.415 19:45:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:31.415 19:45:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:31.415 19:45:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c3f147d8-c38e-4cbd-8ed2-eadff17ce03c 00:27:31.678 19:45:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:31.678 { 00:27:31.678 "name": "c3f147d8-c38e-4cbd-8ed2-eadff17ce03c", 00:27:31.678 "aliases": [ 00:27:31.678 "lvs/nvme0n1p0" 00:27:31.678 ], 00:27:31.678 "product_name": "Logical Volume", 00:27:31.678 "block_size": 4096, 00:27:31.678 "num_blocks": 26476544, 00:27:31.678 "uuid": "c3f147d8-c38e-4cbd-8ed2-eadff17ce03c", 00:27:31.678 "assigned_rate_limits": { 00:27:31.678 "rw_ios_per_sec": 0, 00:27:31.678 "rw_mbytes_per_sec": 0, 00:27:31.678 "r_mbytes_per_sec": 0, 00:27:31.678 "w_mbytes_per_sec": 0 00:27:31.678 }, 00:27:31.678 "claimed": false, 00:27:31.678 "zoned": false, 00:27:31.678 "supported_io_types": { 00:27:31.678 "read": true, 00:27:31.678 "write": true, 00:27:31.678 "unmap": true, 00:27:31.678 "flush": false, 00:27:31.678 "reset": true, 00:27:31.678 "nvme_admin": false, 00:27:31.678 "nvme_io": false, 00:27:31.678 "nvme_io_md": false, 00:27:31.678 "write_zeroes": true, 00:27:31.678 "zcopy": false, 00:27:31.678 "get_zone_info": false, 00:27:31.678 "zone_management": false, 00:27:31.678 "zone_append": false, 00:27:31.678 "compare": false, 00:27:31.678 "compare_and_write": false, 00:27:31.678 "abort": false, 00:27:31.678 "seek_hole": true, 00:27:31.678 "seek_data": true, 00:27:31.678 "copy": false, 00:27:31.678 "nvme_iov_md": false 00:27:31.678 }, 00:27:31.678 "driver_specific": { 00:27:31.678 "lvol": { 00:27:31.678 "lvol_store_uuid": "55f13410-d005-4d58-9a05-cd896861b559", 00:27:31.678 "base_bdev": "nvme0n1", 00:27:31.678 "thin_provision": true, 00:27:31.678 "num_allocated_clusters": 0, 00:27:31.678 "snapshot": false, 00:27:31.678 "clone": false, 00:27:31.678 "esnap_clone": false 00:27:31.678 } 00:27:31.678 } 00:27:31.678 } 00:27:31.678 ]' 00:27:31.678 19:45:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:31.678 19:45:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:31.678 19:45:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:31.678 19:45:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:31.678 19:45:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:31.678 19:45:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:31.678 19:45:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:27:31.678 19:45:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:31.678 19:45:50 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:31.941 19:45:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:31.941 19:45:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:31.941 19:45:50 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size c3f147d8-c38e-4cbd-8ed2-eadff17ce03c 00:27:31.941 19:45:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=c3f147d8-c38e-4cbd-8ed2-eadff17ce03c 00:27:31.941 19:45:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:31.941 19:45:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:31.941 19:45:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:31.941 19:45:50 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c3f147d8-c38e-4cbd-8ed2-eadff17ce03c 00:27:32.202 19:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:32.202 { 00:27:32.202 "name": "c3f147d8-c38e-4cbd-8ed2-eadff17ce03c", 00:27:32.202 "aliases": [ 00:27:32.202 "lvs/nvme0n1p0" 00:27:32.202 ], 00:27:32.202 "product_name": "Logical Volume", 00:27:32.202 "block_size": 4096, 00:27:32.202 "num_blocks": 26476544, 00:27:32.202 "uuid": "c3f147d8-c38e-4cbd-8ed2-eadff17ce03c", 00:27:32.202 "assigned_rate_limits": { 00:27:32.202 "rw_ios_per_sec": 0, 00:27:32.202 "rw_mbytes_per_sec": 0, 00:27:32.202 "r_mbytes_per_sec": 0, 00:27:32.202 "w_mbytes_per_sec": 0 00:27:32.202 }, 00:27:32.202 "claimed": false, 00:27:32.202 "zoned": false, 00:27:32.202 "supported_io_types": { 00:27:32.202 "read": true, 00:27:32.202 "write": true, 00:27:32.202 "unmap": true, 00:27:32.202 "flush": false, 00:27:32.202 "reset": true, 00:27:32.202 "nvme_admin": false, 00:27:32.202 "nvme_io": false, 00:27:32.202 "nvme_io_md": false, 00:27:32.202 "write_zeroes": true, 00:27:32.202 "zcopy": false, 00:27:32.202 "get_zone_info": false, 00:27:32.202 "zone_management": false, 00:27:32.202 "zone_append": false, 00:27:32.202 "compare": false, 00:27:32.202 "compare_and_write": false, 00:27:32.202 "abort": false, 00:27:32.202 "seek_hole": true, 00:27:32.202 "seek_data": true, 00:27:32.202 "copy": false, 00:27:32.202 "nvme_iov_md": false 00:27:32.202 }, 00:27:32.202 "driver_specific": { 00:27:32.202 "lvol": { 00:27:32.202 "lvol_store_uuid": "55f13410-d005-4d58-9a05-cd896861b559", 00:27:32.202 "base_bdev": "nvme0n1", 00:27:32.202 "thin_provision": true, 00:27:32.202 "num_allocated_clusters": 0, 00:27:32.202 "snapshot": false, 00:27:32.202 "clone": false, 00:27:32.202 "esnap_clone": false 00:27:32.202 } 00:27:32.202 } 00:27:32.202 } 00:27:32.202 ]' 00:27:32.202 19:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:32.202 19:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:32.202 19:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:32.202 19:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:32.202 19:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:32.202 19:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:32.202 19:45:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:27:32.202 19:45:51 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:32.463 19:45:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:27:32.463 19:45:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size c3f147d8-c38e-4cbd-8ed2-eadff17ce03c 00:27:32.463 19:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=c3f147d8-c38e-4cbd-8ed2-eadff17ce03c 00:27:32.463 19:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:32.463 19:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:32.463 19:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:32.463 19:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c3f147d8-c38e-4cbd-8ed2-eadff17ce03c 00:27:32.724 19:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:32.724 { 00:27:32.724 "name": "c3f147d8-c38e-4cbd-8ed2-eadff17ce03c", 00:27:32.724 "aliases": [ 00:27:32.724 "lvs/nvme0n1p0" 00:27:32.724 ], 00:27:32.724 "product_name": "Logical Volume", 00:27:32.724 "block_size": 4096, 00:27:32.724 "num_blocks": 26476544, 00:27:32.724 "uuid": "c3f147d8-c38e-4cbd-8ed2-eadff17ce03c", 00:27:32.724 "assigned_rate_limits": { 00:27:32.724 "rw_ios_per_sec": 0, 00:27:32.724 "rw_mbytes_per_sec": 0, 00:27:32.724 "r_mbytes_per_sec": 0, 00:27:32.724 "w_mbytes_per_sec": 0 00:27:32.724 }, 00:27:32.724 "claimed": false, 00:27:32.724 "zoned": false, 00:27:32.724 "supported_io_types": { 00:27:32.724 "read": true, 00:27:32.724 "write": true, 00:27:32.724 "unmap": true, 00:27:32.724 "flush": false, 00:27:32.724 "reset": true, 00:27:32.724 "nvme_admin": false, 00:27:32.724 "nvme_io": false, 00:27:32.724 "nvme_io_md": false, 00:27:32.724 "write_zeroes": true, 00:27:32.724 "zcopy": false, 00:27:32.724 "get_zone_info": false, 00:27:32.724 "zone_management": false, 00:27:32.725 "zone_append": false, 00:27:32.725 "compare": false, 00:27:32.725 "compare_and_write": false, 00:27:32.725 "abort": false, 00:27:32.725 "seek_hole": true, 00:27:32.725 "seek_data": true, 00:27:32.725 "copy": false, 00:27:32.725 "nvme_iov_md": false 00:27:32.725 }, 00:27:32.725 "driver_specific": { 00:27:32.725 "lvol": { 00:27:32.725 "lvol_store_uuid": "55f13410-d005-4d58-9a05-cd896861b559", 00:27:32.725 "base_bdev": "nvme0n1", 00:27:32.725 "thin_provision": true, 00:27:32.725 "num_allocated_clusters": 0, 00:27:32.725 "snapshot": false, 00:27:32.725 "clone": false, 00:27:32.725 "esnap_clone": false 00:27:32.725 } 00:27:32.725 } 00:27:32.725 } 00:27:32.725 ]' 00:27:32.725 19:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:32.725 19:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:32.725 19:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:32.725 19:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:32.725 19:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:32.725 19:45:51 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:32.725 19:45:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:27:32.725 19:45:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d c3f147d8-c38e-4cbd-8ed2-eadff17ce03c 
--l2p_dram_limit 10' 00:27:32.725 19:45:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:27:32.725 19:45:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:27:32.725 19:45:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:27:32.725 19:45:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d c3f147d8-c38e-4cbd-8ed2-eadff17ce03c --l2p_dram_limit 10 -c nvc0n1p0 00:27:32.987 [2024-12-05 19:45:51.770693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.987 [2024-12-05 19:45:51.770768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:32.987 [2024-12-05 19:45:51.770788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:32.987 [2024-12-05 19:45:51.770797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.987 [2024-12-05 19:45:51.770872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.987 [2024-12-05 19:45:51.770882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:32.987 [2024-12-05 19:45:51.770894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:27:32.987 [2024-12-05 19:45:51.770902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.987 [2024-12-05 19:45:51.770932] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:32.987 [2024-12-05 19:45:51.771968] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:32.987 [2024-12-05 19:45:51.772028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.987 [2024-12-05 19:45:51.772038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:32.987 [2024-12-05 19:45:51.772051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.103 ms 00:27:32.987 [2024-12-05 19:45:51.772059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.987 [2024-12-05 19:45:51.772169] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a6f49eaf-f4d0-4462-bb83-644057542bcc 00:27:32.987 [2024-12-05 19:45:51.773833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.987 [2024-12-05 19:45:51.773880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:32.987 [2024-12-05 19:45:51.773907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:32.987 [2024-12-05 19:45:51.773919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.987 [2024-12-05 19:45:51.782687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.987 [2024-12-05 19:45:51.782740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:32.987 [2024-12-05 19:45:51.782751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.717 ms 00:27:32.987 [2024-12-05 19:45:51.782761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.987 [2024-12-05 19:45:51.782863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.987 [2024-12-05 19:45:51.782875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:32.987 [2024-12-05 19:45:51.782885] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:27:32.987 [2024-12-05 19:45:51.782900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.987 [2024-12-05 19:45:51.782962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.987 [2024-12-05 19:45:51.782974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:32.988 [2024-12-05 19:45:51.782985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:32.988 [2024-12-05 19:45:51.782995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.988 [2024-12-05 19:45:51.783017] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:32.988 [2024-12-05 19:45:51.787505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.988 [2024-12-05 19:45:51.787546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:32.988 [2024-12-05 19:45:51.787560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.491 ms 00:27:32.988 [2024-12-05 19:45:51.787569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.988 [2024-12-05 19:45:51.787612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.988 [2024-12-05 19:45:51.787620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:32.988 [2024-12-05 19:45:51.787630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:32.988 [2024-12-05 19:45:51.787638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.988 [2024-12-05 19:45:51.787676] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:32.988 [2024-12-05 19:45:51.787827] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:32.988 [2024-12-05 19:45:51.787845] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:32.988 [2024-12-05 19:45:51.787856] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:32.988 [2024-12-05 19:45:51.787869] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:32.988 [2024-12-05 19:45:51.787879] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:32.988 [2024-12-05 19:45:51.787889] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:32.988 [2024-12-05 19:45:51.787896] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:32.988 [2024-12-05 19:45:51.787910] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:32.988 [2024-12-05 19:45:51.787918] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:32.988 [2024-12-05 19:45:51.787929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.988 [2024-12-05 19:45:51.787945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:32.988 [2024-12-05 19:45:51.787955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:27:32.988 [2024-12-05 19:45:51.787962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.988 [2024-12-05 19:45:51.788050] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.988 [2024-12-05 19:45:51.788058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:32.988 [2024-12-05 19:45:51.788068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:32.988 [2024-12-05 19:45:51.788076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.988 [2024-12-05 19:45:51.788211] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:32.988 [2024-12-05 19:45:51.788222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:32.988 [2024-12-05 19:45:51.788233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:32.988 [2024-12-05 19:45:51.788241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.988 [2024-12-05 19:45:51.788251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:32.988 [2024-12-05 19:45:51.788257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:32.988 [2024-12-05 19:45:51.788266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:32.988 [2024-12-05 19:45:51.788273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:32.988 [2024-12-05 19:45:51.788283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:32.988 [2024-12-05 19:45:51.788290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:32.988 [2024-12-05 19:45:51.788300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:32.988 [2024-12-05 19:45:51.788308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:32.988 [2024-12-05 19:45:51.788316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:32.988 [2024-12-05 19:45:51.788324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:32.988 [2024-12-05 19:45:51.788333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:32.988 [2024-12-05 19:45:51.788340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.988 [2024-12-05 19:45:51.788351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:32.988 [2024-12-05 19:45:51.788358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:32.988 [2024-12-05 19:45:51.788366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.988 [2024-12-05 19:45:51.788374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:32.988 [2024-12-05 19:45:51.788383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:32.988 [2024-12-05 19:45:51.788389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:32.988 [2024-12-05 19:45:51.788400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:32.988 [2024-12-05 19:45:51.788407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:32.988 [2024-12-05 19:45:51.788416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:32.988 [2024-12-05 19:45:51.788423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:32.988 [2024-12-05 19:45:51.788433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:32.988 [2024-12-05 19:45:51.788439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:32.988 [2024-12-05 19:45:51.788448] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:32.988 [2024-12-05 19:45:51.788455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:32.988 [2024-12-05 19:45:51.788464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:32.988 [2024-12-05 19:45:51.788470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:32.988 [2024-12-05 19:45:51.788481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:32.988 [2024-12-05 19:45:51.788487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:32.988 [2024-12-05 19:45:51.788496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:32.988 [2024-12-05 19:45:51.788503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:32.988 [2024-12-05 19:45:51.788513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:32.988 [2024-12-05 19:45:51.788520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:32.988 [2024-12-05 19:45:51.788529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:32.988 [2024-12-05 19:45:51.788535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.988 [2024-12-05 19:45:51.788544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:32.988 [2024-12-05 19:45:51.788551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:32.988 [2024-12-05 19:45:51.788559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.988 [2024-12-05 19:45:51.788565] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:32.988 [2024-12-05 19:45:51.788575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:32.988 [2024-12-05 19:45:51.788582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:32.988 [2024-12-05 19:45:51.788592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:32.988 [2024-12-05 19:45:51.788600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:32.988 [2024-12-05 19:45:51.788610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:32.988 [2024-12-05 19:45:51.788617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:32.988 [2024-12-05 19:45:51.788626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:32.988 [2024-12-05 19:45:51.788633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:32.988 [2024-12-05 19:45:51.788642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:32.988 [2024-12-05 19:45:51.788650] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:32.988 [2024-12-05 19:45:51.788664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:32.988 [2024-12-05 19:45:51.788673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:32.988 [2024-12-05 19:45:51.788683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:32.988 [2024-12-05 19:45:51.788691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:32.988 [2024-12-05 19:45:51.788700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:32.988 [2024-12-05 19:45:51.788707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:32.988 [2024-12-05 19:45:51.788716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:32.988 [2024-12-05 19:45:51.788724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:32.988 [2024-12-05 19:45:51.788734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:32.988 [2024-12-05 19:45:51.788742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:32.988 [2024-12-05 19:45:51.788753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:32.988 [2024-12-05 19:45:51.788759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:32.988 [2024-12-05 19:45:51.788768] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:32.988 [2024-12-05 19:45:51.788776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:32.988 [2024-12-05 19:45:51.788785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:32.988 [2024-12-05 19:45:51.788792] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:32.989 [2024-12-05 19:45:51.788802] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:32.989 [2024-12-05 19:45:51.788810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:32.989 [2024-12-05 19:45:51.788819] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:32.989 [2024-12-05 19:45:51.788826] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:32.989 [2024-12-05 19:45:51.788836] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:32.989 [2024-12-05 19:45:51.788844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.989 [2024-12-05 19:45:51.788854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:32.989 [2024-12-05 19:45:51.788861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:27:32.989 [2024-12-05 19:45:51.788870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.989 [2024-12-05 19:45:51.788909] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:27:32.989 [2024-12-05 19:45:51.788922] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:38.312 [2024-12-05 19:45:56.241075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.312 [2024-12-05 19:45:56.241416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:38.312 [2024-12-05 19:45:56.241669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4452.149 ms 00:27:38.312 [2024-12-05 19:45:56.241730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.312 [2024-12-05 19:45:56.273463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.312 [2024-12-05 19:45:56.273713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:38.312 [2024-12-05 19:45:56.273930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.450 ms 00:27:38.312 [2024-12-05 19:45:56.273979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.312 [2024-12-05 19:45:56.274190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.312 [2024-12-05 19:45:56.274224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:38.312 [2024-12-05 19:45:56.274313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:27:38.312 [2024-12-05 19:45:56.274351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.312 [2024-12-05 19:45:56.309940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.312 [2024-12-05 19:45:56.310182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:38.312 [2024-12-05 19:45:56.310316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.525 ms 00:27:38.312 [2024-12-05 19:45:56.310352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.312 [2024-12-05 19:45:56.310412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.312 [2024-12-05 19:45:56.310442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:38.312 [2024-12-05 19:45:56.310530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:38.312 [2024-12-05 19:45:56.310566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.312 [2024-12-05 19:45:56.311202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.312 [2024-12-05 19:45:56.311271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:38.312 [2024-12-05 19:45:56.311294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:27:38.312 [2024-12-05 19:45:56.311380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.312 [2024-12-05 19:45:56.311528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.312 [2024-12-05 19:45:56.311623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:38.312 [2024-12-05 19:45:56.311690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:27:38.312 [2024-12-05 19:45:56.311721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.312 [2024-12-05 19:45:56.328972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.312 [2024-12-05 19:45:56.329156] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:38.312 [2024-12-05 19:45:56.329219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.217 ms 00:27:38.312 [2024-12-05 19:45:56.329247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.312 [2024-12-05 19:45:56.358830] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:38.312 [2024-12-05 19:45:56.362910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.312 [2024-12-05 19:45:56.363072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:38.312 [2024-12-05 19:45:56.363153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.526 ms 00:27:38.312 [2024-12-05 19:45:56.363180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.312 [2024-12-05 19:45:56.466690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.312 [2024-12-05 19:45:56.466932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:38.312 [2024-12-05 19:45:56.467005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.440 ms 00:27:38.312 [2024-12-05 19:45:56.467031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.312 [2024-12-05 19:45:56.467306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.313 [2024-12-05 19:45:56.467522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:38.313 [2024-12-05 19:45:56.467621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:27:38.313 [2024-12-05 19:45:56.467648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.313 [2024-12-05 19:45:56.579490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.313 [2024-12-05 19:45:56.579706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:38.313 [2024-12-05 19:45:56.579768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.760 ms 00:27:38.313 [2024-12-05 19:45:56.579792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.313 [2024-12-05 19:45:56.604220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.313 [2024-12-05 19:45:56.604407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:38.313 [2024-12-05 19:45:56.604479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.288 ms 00:27:38.313 [2024-12-05 19:45:56.604500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.313 [2024-12-05 19:45:56.605115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.313 [2024-12-05 19:45:56.605178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:38.313 [2024-12-05 19:45:56.605204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:27:38.313 [2024-12-05 19:45:56.605348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.313 [2024-12-05 19:45:56.686736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.313 [2024-12-05 19:45:56.686941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:38.313 [2024-12-05 19:45:56.687010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.273 ms 00:27:38.313 [2024-12-05 19:45:56.687022] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.313 [2024-12-05 19:45:56.713876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.313 [2024-12-05 19:45:56.714064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:38.313 [2024-12-05 19:45:56.714088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.779 ms 00:27:38.313 [2024-12-05 19:45:56.714097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.313 [2024-12-05 19:45:56.740042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.313 [2024-12-05 19:45:56.740095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:38.313 [2024-12-05 19:45:56.740110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.877 ms 00:27:38.313 [2024-12-05 19:45:56.740118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.313 [2024-12-05 19:45:56.766786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.313 [2024-12-05 19:45:56.766848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:38.313 [2024-12-05 19:45:56.766865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.591 ms 00:27:38.313 [2024-12-05 19:45:56.766874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.313 [2024-12-05 19:45:56.766936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.313 [2024-12-05 19:45:56.766947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:38.313 [2024-12-05 19:45:56.766963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:27:38.313 [2024-12-05 19:45:56.766971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.313 [2024-12-05 19:45:56.767085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:38.313 [2024-12-05 19:45:56.767099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:38.313 [2024-12-05 19:45:56.767110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:27:38.313 [2024-12-05 19:45:56.767119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:38.313 [2024-12-05 19:45:56.768357] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4997.125 ms, result 0 00:27:38.313 { 00:27:38.313 "name": "ftl0", 00:27:38.313 "uuid": "a6f49eaf-f4d0-4462-bb83-644057542bcc" 00:27:38.313 } 00:27:38.313 19:45:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:27:38.313 19:45:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:38.313 19:45:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:27:38.313 19:45:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:27:38.313 19:45:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:27:38.313 /dev/nbd0 00:27:38.313 19:45:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:27:38.313 19:45:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:38.313 19:45:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:27:38.313 19:45:57 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:38.313 19:45:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:38.313 19:45:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:38.313 19:45:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:27:38.313 19:45:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:38.313 19:45:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:38.313 19:45:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:27:38.313 1+0 records in 00:27:38.313 1+0 records out 00:27:38.313 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445216 s, 9.2 MB/s 00:27:38.313 19:45:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:38.313 19:45:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:27:38.313 19:45:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:38.313 19:45:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:38.313 19:45:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:27:38.313 19:45:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:27:38.575 [2024-12-05 19:45:57.354680] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:27:38.575 [2024-12-05 19:45:57.354831] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79456 ] 00:27:38.575 [2024-12-05 19:45:57.516975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.837 [2024-12-05 19:45:57.652827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.225  [2024-12-05T19:46:00.172Z] Copying: 189/1024 [MB] (189 MBps) [2024-12-05T19:46:01.232Z] Copying: 379/1024 [MB] (190 MBps) [2024-12-05T19:46:02.173Z] Copying: 569/1024 [MB] (189 MBps) [2024-12-05T19:46:03.116Z] Copying: 757/1024 [MB] (187 MBps) [2024-12-05T19:46:03.376Z] Copying: 944/1024 [MB] (187 MBps) [2024-12-05T19:46:04.321Z] Copying: 1024/1024 [MB] (average 187 MBps) 00:27:45.315 00:27:45.315 19:46:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:47.896 19:46:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:27:47.896 [2024-12-05 19:46:06.454668] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
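At this point the test has filled a 1 GiB testfile from /dev/urandom, checksummed it, and is streaming it onto the FTL bdev through /dev/nbd0. Condensed, the write-and-checksum phase visible in the trace amounts to the following (paths shortened; the redirect into testfile.md5 is an assumption, since the log only shows md5sum being run and a later 'md5sum -c' succeeding):

    # Write phase of the dirty-shutdown test, condensed from the trace above.
    rpc.py nbd_start_disk ftl0 /dev/nbd0                                       # expose ftl0 as a block device
    spdk_dd -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144    # 262144 x 4 KiB = 1 GiB
    md5sum testfile > testfile.md5                                             # checksum kept for the later verify
    spdk_dd -m 0x2 --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct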
00:27:47.896 [2024-12-05 19:46:06.454798] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79550 ] 00:27:47.896 [2024-12-05 19:46:06.611933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.896 [2024-12-05 19:46:06.713157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:49.282  [2024-12-05T19:46:09.260Z] Copying: 10/1024 [MB] (10 MBps) [2024-12-05T19:46:10.202Z] Copying: 19808/1048576 [kB] (9204 kBps) [2024-12-05T19:46:11.143Z] Copying: 30/1024 [MB] (11 MBps) [2024-12-05T19:46:12.086Z] Copying: 41/1024 [MB] (11 MBps) [2024-12-05T19:46:13.044Z] Copying: 52/1024 [MB] (10 MBps) [2024-12-05T19:46:13.985Z] Copying: 72/1024 [MB] (20 MBps) [2024-12-05T19:46:15.368Z] Copying: 102/1024 [MB] (29 MBps) [2024-12-05T19:46:15.941Z] Copying: 132/1024 [MB] (30 MBps) [2024-12-05T19:46:17.324Z] Copying: 160/1024 [MB] (28 MBps) [2024-12-05T19:46:18.261Z] Copying: 189/1024 [MB] (28 MBps) [2024-12-05T19:46:19.201Z] Copying: 217/1024 [MB] (28 MBps) [2024-12-05T19:46:20.139Z] Copying: 245/1024 [MB] (27 MBps) [2024-12-05T19:46:21.134Z] Copying: 272/1024 [MB] (27 MBps) [2024-12-05T19:46:22.076Z] Copying: 302/1024 [MB] (30 MBps) [2024-12-05T19:46:23.014Z] Copying: 332/1024 [MB] (29 MBps) [2024-12-05T19:46:24.020Z] Copying: 362/1024 [MB] (30 MBps) [2024-12-05T19:46:25.017Z] Copying: 388/1024 [MB] (25 MBps) [2024-12-05T19:46:25.955Z] Copying: 412/1024 [MB] (23 MBps) [2024-12-05T19:46:27.338Z] Copying: 437/1024 [MB] (25 MBps) [2024-12-05T19:46:27.954Z] Copying: 467/1024 [MB] (30 MBps) [2024-12-05T19:46:29.337Z] Copying: 495/1024 [MB] (28 MBps) [2024-12-05T19:46:30.279Z] Copying: 525/1024 [MB] (29 MBps) [2024-12-05T19:46:31.221Z] Copying: 553/1024 [MB] (27 MBps) [2024-12-05T19:46:32.164Z] Copying: 581/1024 [MB] (28 MBps) [2024-12-05T19:46:33.102Z] Copying: 609/1024 [MB] (28 MBps) [2024-12-05T19:46:34.044Z] Copying: 638/1024 [MB] (28 MBps) [2024-12-05T19:46:34.987Z] Copying: 665/1024 [MB] (27 MBps) [2024-12-05T19:46:36.369Z] Copying: 694/1024 [MB] (29 MBps) [2024-12-05T19:46:37.001Z] Copying: 725/1024 [MB] (31 MBps) [2024-12-05T19:46:37.941Z] Copying: 752/1024 [MB] (26 MBps) [2024-12-05T19:46:39.321Z] Copying: 782/1024 [MB] (30 MBps) [2024-12-05T19:46:40.266Z] Copying: 812/1024 [MB] (30 MBps) [2024-12-05T19:46:41.202Z] Copying: 838/1024 [MB] (25 MBps) [2024-12-05T19:46:42.141Z] Copying: 868/1024 [MB] (30 MBps) [2024-12-05T19:46:43.079Z] Copying: 900/1024 [MB] (31 MBps) [2024-12-05T19:46:44.016Z] Copying: 927/1024 [MB] (26 MBps) [2024-12-05T19:46:44.952Z] Copying: 957/1024 [MB] (29 MBps) [2024-12-05T19:46:46.333Z] Copying: 988/1024 [MB] (30 MBps) [2024-12-05T19:46:46.333Z] Copying: 1015/1024 [MB] (27 MBps) [2024-12-05T19:46:46.901Z] Copying: 1024/1024 [MB] (average 26 MBps) 00:28:27.895 00:28:27.895 19:46:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:28:27.895 19:46:46 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:28:28.154 19:46:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:28.415 [2024-12-05 19:46:47.259772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.415 [2024-12-05 19:46:47.259822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinit core IO channel 00:28:28.415 [2024-12-05 19:46:47.259836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:28.415 [2024-12-05 19:46:47.259846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.415 [2024-12-05 19:46:47.259872] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:28.415 [2024-12-05 19:46:47.262511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.415 [2024-12-05 19:46:47.262707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:28.415 [2024-12-05 19:46:47.262728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.620 ms 00:28:28.415 [2024-12-05 19:46:47.262737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.415 [2024-12-05 19:46:47.264545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.415 [2024-12-05 19:46:47.264572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:28.415 [2024-12-05 19:46:47.264583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.776 ms 00:28:28.415 [2024-12-05 19:46:47.264591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.415 [2024-12-05 19:46:47.279311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.415 [2024-12-05 19:46:47.279342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:28.415 [2024-12-05 19:46:47.279360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.697 ms 00:28:28.415 [2024-12-05 19:46:47.279368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.415 [2024-12-05 19:46:47.285636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.415 [2024-12-05 19:46:47.285767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:28.415 [2024-12-05 19:46:47.285786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.234 ms 00:28:28.415 [2024-12-05 19:46:47.285795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.415 [2024-12-05 19:46:47.308901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.415 [2024-12-05 19:46:47.309027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:28.415 [2024-12-05 19:46:47.309045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.030 ms 00:28:28.415 [2024-12-05 19:46:47.309053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.415 [2024-12-05 19:46:47.323696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.415 [2024-12-05 19:46:47.323814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:28.415 [2024-12-05 19:46:47.323882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.606 ms 00:28:28.415 [2024-12-05 19:46:47.323906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.415 [2024-12-05 19:46:47.324059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.415 [2024-12-05 19:46:47.324099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:28.415 [2024-12-05 19:46:47.324154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:28:28.415 [2024-12-05 19:46:47.324175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
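For orientation, the 'FTL shutdown' trace being printed here is triggered by the last step of the I/O phase logged above. Condensed from this section's trace (long paths abbreviated, arguments exactly as logged; this is a summary of the logged commands, not the dirty_shutdown.sh script itself):

# Condensed from the trace in this section.
rpc.py nbd_start_disk ftl0 /dev/nbd0                                   # export the FTL bdev over NBD
spdk_dd -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144
md5sum testfile                                                        # checksum kept for later comparison
spdk_dd -m 0x2 --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
sync /dev/nbd0
rpc.py nbd_stop_disk /dev/nbd0
rpc.py bdev_ftl_unload -b ftl0                                         # runs the shutdown steps traced here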
00:28:28.415 [2024-12-05 19:46:47.347042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.415 [2024-12-05 19:46:47.347169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:28.415 [2024-12-05 19:46:47.347229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.835 ms 00:28:28.415 [2024-12-05 19:46:47.347251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.415 [2024-12-05 19:46:47.369937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.415 [2024-12-05 19:46:47.370043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:28.415 [2024-12-05 19:46:47.370093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.592 ms 00:28:28.415 [2024-12-05 19:46:47.370114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.415 [2024-12-05 19:46:47.392229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.415 [2024-12-05 19:46:47.392333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:28.415 [2024-12-05 19:46:47.392384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.033 ms 00:28:28.415 [2024-12-05 19:46:47.392423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.415 [2024-12-05 19:46:47.414724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.415 [2024-12-05 19:46:47.414835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:28.415 [2024-12-05 19:46:47.414886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.216 ms 00:28:28.415 [2024-12-05 19:46:47.414908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.415 [2024-12-05 19:46:47.414950] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:28.415 [2024-12-05 19:46:47.414977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415395] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.415978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.416007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.416036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.416065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.416121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.416168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.416199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.416227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.416290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 
19:46:47.416321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.416351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.416380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.416411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.416469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.416501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.416529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:28.415 [2024-12-05 19:46:47.416559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.416587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.416667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.416703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.416732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.416760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.416816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.416845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.416945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.416974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.417004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.417032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.417063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.417119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.417170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.417199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.417229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.417258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:28:28.416 [2024-12-05 19:46:47.417323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.417354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.417385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.417413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.417472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.417503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.417533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.417562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.417592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.417668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.417726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.417756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.417786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.417843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.417886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.417915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.417945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.418002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.418063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.418115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:28.416 [2024-12-05 19:46:47.418159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:28.677 [2024-12-05 19:46:47.418188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:28.677 [2024-12-05 19:46:47.418199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:28.677 [2024-12-05 19:46:47.418207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:28.677 [2024-12-05 19:46:47.418216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:28:28.677 [2024-12-05 19:46:47.418223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:28.677 [2024-12-05 19:46:47.418234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:28.677 [2024-12-05 19:46:47.418242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:28.677 [2024-12-05 19:46:47.418251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:28.677 [2024-12-05 19:46:47.418258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:28.677 [2024-12-05 19:46:47.418268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:28.677 [2024-12-05 19:46:47.418276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:28.677 [2024-12-05 19:46:47.418285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:28.677 [2024-12-05 19:46:47.418293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:28.677 [2024-12-05 19:46:47.418302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:28.677 [2024-12-05 19:46:47.418309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:28.677 [2024-12-05 19:46:47.418318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:28.677 [2024-12-05 19:46:47.418326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:28.677 [2024-12-05 19:46:47.418336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:28.677 [2024-12-05 19:46:47.418352] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:28.677 [2024-12-05 19:46:47.418361] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a6f49eaf-f4d0-4462-bb83-644057542bcc 00:28:28.677 [2024-12-05 19:46:47.418369] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:28.677 [2024-12-05 19:46:47.418379] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:28.677 [2024-12-05 19:46:47.418388] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:28.677 [2024-12-05 19:46:47.418397] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:28.677 [2024-12-05 19:46:47.418404] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:28.677 [2024-12-05 19:46:47.418413] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:28.677 [2024-12-05 19:46:47.418420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:28.677 [2024-12-05 19:46:47.418427] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:28.677 [2024-12-05 19:46:47.418434] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:28.677 [2024-12-05 19:46:47.418442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.677 [2024-12-05 19:46:47.418450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:28.677 [2024-12-05 19:46:47.418460] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.494 ms 00:28:28.677 [2024-12-05 19:46:47.418467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.677 [2024-12-05 19:46:47.430934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.677 [2024-12-05 19:46:47.431038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:28.677 [2024-12-05 19:46:47.431090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.430 ms 00:28:28.677 [2024-12-05 19:46:47.431112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.677 [2024-12-05 19:46:47.431487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.677 [2024-12-05 19:46:47.431563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:28.677 [2024-12-05 19:46:47.431618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:28:28.677 [2024-12-05 19:46:47.431640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.677 [2024-12-05 19:46:47.472741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.677 [2024-12-05 19:46:47.472885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:28.677 [2024-12-05 19:46:47.472943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.677 [2024-12-05 19:46:47.472965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.677 [2024-12-05 19:46:47.473040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.677 [2024-12-05 19:46:47.473062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:28.677 [2024-12-05 19:46:47.473118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.677 [2024-12-05 19:46:47.473160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.677 [2024-12-05 19:46:47.473266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.677 [2024-12-05 19:46:47.473336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:28.677 [2024-12-05 19:46:47.473358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.677 [2024-12-05 19:46:47.473410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.677 [2024-12-05 19:46:47.473448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.677 [2024-12-05 19:46:47.473468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:28.677 [2024-12-05 19:46:47.473489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.677 [2024-12-05 19:46:47.473507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.677 [2024-12-05 19:46:47.549842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.678 [2024-12-05 19:46:47.550010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:28.678 [2024-12-05 19:46:47.550066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.678 [2024-12-05 19:46:47.550089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.678 [2024-12-05 19:46:47.613406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.678 [2024-12-05 19:46:47.613578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:28:28.678 [2024-12-05 19:46:47.613635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.678 [2024-12-05 19:46:47.613658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.678 [2024-12-05 19:46:47.613748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.678 [2024-12-05 19:46:47.613804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:28.678 [2024-12-05 19:46:47.613833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.678 [2024-12-05 19:46:47.613868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.678 [2024-12-05 19:46:47.614059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.678 [2024-12-05 19:46:47.614146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:28.678 [2024-12-05 19:46:47.614198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.678 [2024-12-05 19:46:47.614221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.678 [2024-12-05 19:46:47.614330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.678 [2024-12-05 19:46:47.614448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:28.678 [2024-12-05 19:46:47.614474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.678 [2024-12-05 19:46:47.614495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.678 [2024-12-05 19:46:47.614546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.678 [2024-12-05 19:46:47.614629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:28.678 [2024-12-05 19:46:47.614654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.678 [2024-12-05 19:46:47.614672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.678 [2024-12-05 19:46:47.614722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.678 [2024-12-05 19:46:47.614744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:28.678 [2024-12-05 19:46:47.614809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.678 [2024-12-05 19:46:47.614829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.678 [2024-12-05 19:46:47.614883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.678 [2024-12-05 19:46:47.614907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:28.678 [2024-12-05 19:46:47.614960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.678 [2024-12-05 19:46:47.614982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.678 [2024-12-05 19:46:47.615119] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 355.314 ms, result 0 00:28:28.678 true 00:28:28.678 19:46:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 79304 00:28:28.678 19:46:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid79304 00:28:28.678 19:46:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:28:28.938 [2024-12-05 19:46:47.700319] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:28:28.938 [2024-12-05 19:46:47.700547] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79981 ] 00:28:28.938 [2024-12-05 19:46:47.858091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.197 [2024-12-05 19:46:47.956262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:30.221  [2024-12-05T19:46:50.607Z] Copying: 194/1024 [MB] (194 MBps) [2024-12-05T19:46:51.544Z] Copying: 389/1024 [MB] (195 MBps) [2024-12-05T19:46:52.484Z] Copying: 585/1024 [MB] (196 MBps) [2024-12-05T19:46:53.068Z] Copying: 829/1024 [MB] (243 MBps) [2024-12-05T19:46:53.638Z] Copying: 1024/1024 [MB] (average 214 MBps) 00:28:34.632 00:28:34.632 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 79304 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:28:34.633 19:46:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:34.894 [2024-12-05 19:46:53.641452] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:28:34.894 [2024-12-05 19:46:53.641789] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80048 ] 00:28:34.894 [2024-12-05 19:46:53.811801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.894 [2024-12-05 19:46:53.896432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.156 [2024-12-05 19:46:54.114987] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:35.156 [2024-12-05 19:46:54.115044] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:35.418 [2024-12-05 19:46:54.177876] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:35.418 [2024-12-05 19:46:54.178063] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:35.418 [2024-12-05 19:46:54.178174] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:35.418 [2024-12-05 19:46:54.349984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.418 [2024-12-05 19:46:54.350029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:35.418 [2024-12-05 19:46:54.350040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:35.418 [2024-12-05 19:46:54.350048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.418 [2024-12-05 19:46:54.350086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.418 [2024-12-05 19:46:54.350094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:35.418 [2024-12-05 19:46:54.350101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:28:35.418 [2024-12-05 19:46:54.350108] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:28:35.418 [2024-12-05 19:46:54.350122] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:35.418 [2024-12-05 19:46:54.350705] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:35.418 [2024-12-05 19:46:54.350721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.418 [2024-12-05 19:46:54.350727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:35.418 [2024-12-05 19:46:54.350735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:28:35.418 [2024-12-05 19:46:54.350740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.418 [2024-12-05 19:46:54.351799] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:35.418 [2024-12-05 19:46:54.363713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.418 [2024-12-05 19:46:54.363747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:35.418 [2024-12-05 19:46:54.363757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.915 ms 00:28:35.418 [2024-12-05 19:46:54.363764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.418 [2024-12-05 19:46:54.363815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.418 [2024-12-05 19:46:54.363823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:35.418 [2024-12-05 19:46:54.363830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:28:35.418 [2024-12-05 19:46:54.363836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.418 [2024-12-05 19:46:54.368589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.418 [2024-12-05 19:46:54.368618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:35.418 [2024-12-05 19:46:54.368627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.704 ms 00:28:35.418 [2024-12-05 19:46:54.368633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.418 [2024-12-05 19:46:54.368693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.418 [2024-12-05 19:46:54.368700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:35.418 [2024-12-05 19:46:54.368707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:28:35.418 [2024-12-05 19:46:54.368713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.418 [2024-12-05 19:46:54.368754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.418 [2024-12-05 19:46:54.368762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:35.418 [2024-12-05 19:46:54.368768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:35.418 [2024-12-05 19:46:54.368774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.418 [2024-12-05 19:46:54.368792] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:35.418 [2024-12-05 19:46:54.371739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.418 [2024-12-05 19:46:54.371764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:35.418 
[2024-12-05 19:46:54.371772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.952 ms 00:28:35.418 [2024-12-05 19:46:54.371778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.418 [2024-12-05 19:46:54.371810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.418 [2024-12-05 19:46:54.371823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:35.418 [2024-12-05 19:46:54.371834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:35.418 [2024-12-05 19:46:54.371844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.418 [2024-12-05 19:46:54.371865] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:35.418 [2024-12-05 19:46:54.371881] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:35.418 [2024-12-05 19:46:54.371909] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:35.418 [2024-12-05 19:46:54.371921] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:35.418 [2024-12-05 19:46:54.372004] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:35.418 [2024-12-05 19:46:54.372012] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:35.418 [2024-12-05 19:46:54.372021] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:35.418 [2024-12-05 19:46:54.372031] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:35.418 [2024-12-05 19:46:54.372039] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:35.418 [2024-12-05 19:46:54.372045] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:35.418 [2024-12-05 19:46:54.372052] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:35.418 [2024-12-05 19:46:54.372057] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:35.418 [2024-12-05 19:46:54.372063] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:35.418 [2024-12-05 19:46:54.372070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.418 [2024-12-05 19:46:54.372080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:35.418 [2024-12-05 19:46:54.372086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:28:35.418 [2024-12-05 19:46:54.372092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.418 [2024-12-05 19:46:54.372175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.418 [2024-12-05 19:46:54.372185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:35.418 [2024-12-05 19:46:54.372191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:28:35.418 [2024-12-05 19:46:54.372197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.418 [2024-12-05 19:46:54.372281] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:35.418 [2024-12-05 19:46:54.372289] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:35.418 [2024-12-05 19:46:54.372296] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:35.418 [2024-12-05 19:46:54.372302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.418 [2024-12-05 19:46:54.372308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:35.418 [2024-12-05 19:46:54.372314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:35.418 [2024-12-05 19:46:54.372319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:35.418 [2024-12-05 19:46:54.372325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:35.418 [2024-12-05 19:46:54.372331] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:35.418 [2024-12-05 19:46:54.372341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:35.418 [2024-12-05 19:46:54.372346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:35.418 [2024-12-05 19:46:54.372352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:35.418 [2024-12-05 19:46:54.372357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:35.418 [2024-12-05 19:46:54.372362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:35.418 [2024-12-05 19:46:54.372367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:35.418 [2024-12-05 19:46:54.372372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.418 [2024-12-05 19:46:54.372378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:35.418 [2024-12-05 19:46:54.372383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:35.418 [2024-12-05 19:46:54.372388] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.418 [2024-12-05 19:46:54.372394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:35.418 [2024-12-05 19:46:54.372400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:35.418 [2024-12-05 19:46:54.372406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:35.418 [2024-12-05 19:46:54.372412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:35.418 [2024-12-05 19:46:54.372417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:35.418 [2024-12-05 19:46:54.372422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:35.418 [2024-12-05 19:46:54.372427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:35.419 [2024-12-05 19:46:54.372432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:35.419 [2024-12-05 19:46:54.372437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:35.419 [2024-12-05 19:46:54.372443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:35.419 [2024-12-05 19:46:54.372449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:35.419 [2024-12-05 19:46:54.372454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:35.419 [2024-12-05 19:46:54.372459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:35.419 [2024-12-05 19:46:54.372464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:35.419 [2024-12-05 19:46:54.372469] 
ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:35.419 [2024-12-05 19:46:54.372474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:35.419 [2024-12-05 19:46:54.372483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:35.419 [2024-12-05 19:46:54.372491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:35.419 [2024-12-05 19:46:54.372499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:35.419 [2024-12-05 19:46:54.372507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:35.419 [2024-12-05 19:46:54.372515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.419 [2024-12-05 19:46:54.372522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:35.419 [2024-12-05 19:46:54.372531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:35.419 [2024-12-05 19:46:54.372539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.419 [2024-12-05 19:46:54.372548] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:35.419 [2024-12-05 19:46:54.372557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:35.419 [2024-12-05 19:46:54.372568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:35.419 [2024-12-05 19:46:54.372578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:35.419 [2024-12-05 19:46:54.372587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:35.419 [2024-12-05 19:46:54.372595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:35.419 [2024-12-05 19:46:54.372605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:35.419 [2024-12-05 19:46:54.372610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:35.419 [2024-12-05 19:46:54.372616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:35.419 [2024-12-05 19:46:54.372622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:35.419 [2024-12-05 19:46:54.372629] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:35.419 [2024-12-05 19:46:54.372637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:35.419 [2024-12-05 19:46:54.372644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:35.419 [2024-12-05 19:46:54.372650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:35.419 [2024-12-05 19:46:54.372655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:35.419 [2024-12-05 19:46:54.372661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:35.419 [2024-12-05 19:46:54.372667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:35.419 [2024-12-05 19:46:54.372673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc 
ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:35.419 [2024-12-05 19:46:54.372679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:35.419 [2024-12-05 19:46:54.372684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:35.419 [2024-12-05 19:46:54.372690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:35.419 [2024-12-05 19:46:54.372696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:35.419 [2024-12-05 19:46:54.372701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:35.419 [2024-12-05 19:46:54.372707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:35.419 [2024-12-05 19:46:54.372713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:35.419 [2024-12-05 19:46:54.372719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:35.419 [2024-12-05 19:46:54.372725] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:35.419 [2024-12-05 19:46:54.372731] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:35.419 [2024-12-05 19:46:54.372737] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:35.419 [2024-12-05 19:46:54.372743] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:35.419 [2024-12-05 19:46:54.372749] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:35.419 [2024-12-05 19:46:54.372755] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:35.419 [2024-12-05 19:46:54.372761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.419 [2024-12-05 19:46:54.372766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:35.419 [2024-12-05 19:46:54.372773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:28:35.419 [2024-12-05 19:46:54.372778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.419 [2024-12-05 19:46:54.395088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.419 [2024-12-05 19:46:54.395141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:35.419 [2024-12-05 19:46:54.395151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.245 ms 00:28:35.419 [2024-12-05 19:46:54.395159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.419 [2024-12-05 19:46:54.395241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.419 [2024-12-05 19:46:54.395248] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:35.419 [2024-12-05 19:46:54.395255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:35.419 [2024-12-05 19:46:54.395261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.680 [2024-12-05 19:46:54.436397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.680 [2024-12-05 19:46:54.436614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:35.680 [2024-12-05 19:46:54.436637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.079 ms 00:28:35.680 [2024-12-05 19:46:54.436646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.680 [2024-12-05 19:46:54.436706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.680 [2024-12-05 19:46:54.436716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:35.680 [2024-12-05 19:46:54.436726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:35.680 [2024-12-05 19:46:54.436734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.680 [2024-12-05 19:46:54.437113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.680 [2024-12-05 19:46:54.437151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:35.680 [2024-12-05 19:46:54.437161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.316 ms 00:28:35.680 [2024-12-05 19:46:54.437173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.680 [2024-12-05 19:46:54.437310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.680 [2024-12-05 19:46:54.437319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:35.680 [2024-12-05 19:46:54.437328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:28:35.680 [2024-12-05 19:46:54.437335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.680 [2024-12-05 19:46:54.449121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.680 [2024-12-05 19:46:54.449162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:35.680 [2024-12-05 19:46:54.449173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.766 ms 00:28:35.680 [2024-12-05 19:46:54.449181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.680 [2024-12-05 19:46:54.461212] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:35.680 [2024-12-05 19:46:54.461248] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:35.680 [2024-12-05 19:46:54.461260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.680 [2024-12-05 19:46:54.461269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:35.680 [2024-12-05 19:46:54.461280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.949 ms 00:28:35.680 [2024-12-05 19:46:54.461287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.680 [2024-12-05 19:46:54.483638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.680 [2024-12-05 19:46:54.483811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:35.680 [2024-12-05 
19:46:54.483870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.304 ms 00:28:35.680 [2024-12-05 19:46:54.483893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.680 [2024-12-05 19:46:54.493624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.680 [2024-12-05 19:46:54.493749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:35.680 [2024-12-05 19:46:54.493791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.666 ms 00:28:35.680 [2024-12-05 19:46:54.493809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.680 [2024-12-05 19:46:54.503779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.680 [2024-12-05 19:46:54.503890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:35.680 [2024-12-05 19:46:54.503934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.918 ms 00:28:35.680 [2024-12-05 19:46:54.503951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.680 [2024-12-05 19:46:54.504465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.680 [2024-12-05 19:46:54.504541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:35.680 [2024-12-05 19:46:54.504581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:28:35.680 [2024-12-05 19:46:54.504599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.680 [2024-12-05 19:46:54.553208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.680 [2024-12-05 19:46:54.553416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:35.680 [2024-12-05 19:46:54.553471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.582 ms 00:28:35.680 [2024-12-05 19:46:54.553491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.680 [2024-12-05 19:46:54.562686] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:35.680 [2024-12-05 19:46:54.565213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.680 [2024-12-05 19:46:54.565301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:35.680 [2024-12-05 19:46:54.565315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.674 ms 00:28:35.680 [2024-12-05 19:46:54.565327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.680 [2024-12-05 19:46:54.565425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.680 [2024-12-05 19:46:54.565434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:35.680 [2024-12-05 19:46:54.565442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:28:35.680 [2024-12-05 19:46:54.565448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.680 [2024-12-05 19:46:54.565503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.680 [2024-12-05 19:46:54.565512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:35.680 [2024-12-05 19:46:54.565519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:28:35.680 [2024-12-05 19:46:54.565525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.680 [2024-12-05 19:46:54.565543] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.680 [2024-12-05 19:46:54.565550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:35.680 [2024-12-05 19:46:54.565556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:35.680 [2024-12-05 19:46:54.565562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.680 [2024-12-05 19:46:54.565586] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:35.680 [2024-12-05 19:46:54.565594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.680 [2024-12-05 19:46:54.565600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:35.680 [2024-12-05 19:46:54.565607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:35.680 [2024-12-05 19:46:54.565615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.680 [2024-12-05 19:46:54.588643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.680 [2024-12-05 19:46:54.588830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:35.680 [2024-12-05 19:46:54.588883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.012 ms 00:28:35.680 [2024-12-05 19:46:54.588907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.680 [2024-12-05 19:46:54.589012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:35.680 [2024-12-05 19:46:54.589037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:35.680 [2024-12-05 19:46:54.589060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:28:35.680 [2024-12-05 19:46:54.589161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:35.680 [2024-12-05 19:46:54.590309] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 239.855 ms, result 0 00:28:36.630  [2024-12-05T19:46:57.019Z] Copying: 45/1024 [MB] (45 MBps) [2024-12-05T19:46:57.606Z] Copying: 89/1024 [MB] (43 MBps) [2024-12-05T19:46:58.988Z] Copying: 131/1024 [MB] (42 MBps) [2024-12-05T19:46:59.930Z] Copying: 160/1024 [MB] (28 MBps) [2024-12-05T19:47:00.884Z] Copying: 199/1024 [MB] (39 MBps) [2024-12-05T19:47:01.924Z] Copying: 245/1024 [MB] (45 MBps) [2024-12-05T19:47:02.869Z] Copying: 289/1024 [MB] (43 MBps) [2024-12-05T19:47:03.811Z] Copying: 333/1024 [MB] (44 MBps) [2024-12-05T19:47:04.750Z] Copying: 379/1024 [MB] (45 MBps) [2024-12-05T19:47:05.701Z] Copying: 425/1024 [MB] (45 MBps) [2024-12-05T19:47:06.646Z] Copying: 471/1024 [MB] (46 MBps) [2024-12-05T19:47:08.035Z] Copying: 514/1024 [MB] (43 MBps) [2024-12-05T19:47:08.606Z] Copying: 563/1024 [MB] (48 MBps) [2024-12-05T19:47:09.993Z] Copying: 600/1024 [MB] (37 MBps) [2024-12-05T19:47:10.954Z] Copying: 646/1024 [MB] (45 MBps) [2024-12-05T19:47:11.898Z] Copying: 691/1024 [MB] (45 MBps) [2024-12-05T19:47:12.840Z] Copying: 735/1024 [MB] (44 MBps) [2024-12-05T19:47:13.785Z] Copying: 779/1024 [MB] (43 MBps) [2024-12-05T19:47:14.819Z] Copying: 824/1024 [MB] (44 MBps) [2024-12-05T19:47:15.757Z] Copying: 868/1024 [MB] (44 MBps) [2024-12-05T19:47:16.697Z] Copying: 916/1024 [MB] (47 MBps) [2024-12-05T19:47:17.700Z] Copying: 959/1024 [MB] (43 MBps) [2024-12-05T19:47:18.641Z] Copying: 995/1024 [MB] (35 MBps) [2024-12-05T19:47:19.585Z] Copying: 1023/1024 [MB] (27 MBps) [2024-12-05T19:47:19.585Z] 
Copying: 1024/1024 [MB] (average 41 MBps)[2024-12-05 19:47:19.476672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.579 [2024-12-05 19:47:19.476837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:00.579 [2024-12-05 19:47:19.476861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:00.579 [2024-12-05 19:47:19.476871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.579 [2024-12-05 19:47:19.476923] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:00.579 [2024-12-05 19:47:19.479665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.579 [2024-12-05 19:47:19.479702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:00.579 [2024-12-05 19:47:19.479713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.725 ms 00:29:00.579 [2024-12-05 19:47:19.479727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.579 [2024-12-05 19:47:19.490179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.579 [2024-12-05 19:47:19.490321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:00.579 [2024-12-05 19:47:19.490379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.452 ms 00:29:00.579 [2024-12-05 19:47:19.490402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.579 [2024-12-05 19:47:19.510943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.579 [2024-12-05 19:47:19.511094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:00.579 [2024-12-05 19:47:19.511172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.509 ms 00:29:00.579 [2024-12-05 19:47:19.511197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.579 [2024-12-05 19:47:19.517454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.580 [2024-12-05 19:47:19.517486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:00.580 [2024-12-05 19:47:19.517497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.117 ms 00:29:00.580 [2024-12-05 19:47:19.517505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.580 [2024-12-05 19:47:19.541709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.580 [2024-12-05 19:47:19.541744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:00.580 [2024-12-05 19:47:19.541756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.155 ms 00:29:00.580 [2024-12-05 19:47:19.541765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:00.580 [2024-12-05 19:47:19.556739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:00.580 [2024-12-05 19:47:19.556778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:00.580 [2024-12-05 19:47:19.556790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.939 ms 00:29:00.580 [2024-12-05 19:47:19.556800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.151 [2024-12-05 19:47:19.890692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.151 [2024-12-05 19:47:19.890791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L 
metadata 00:29:01.151 [2024-12-05 19:47:19.890822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 333.844 ms 00:29:01.151 [2024-12-05 19:47:19.890831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.151 [2024-12-05 19:47:19.917822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.151 [2024-12-05 19:47:19.917899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:01.151 [2024-12-05 19:47:19.917915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.972 ms 00:29:01.151 [2024-12-05 19:47:19.917939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.151 [2024-12-05 19:47:19.942885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.151 [2024-12-05 19:47:19.943086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:01.151 [2024-12-05 19:47:19.943107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.895 ms 00:29:01.151 [2024-12-05 19:47:19.943116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.151 [2024-12-05 19:47:19.967492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.151 [2024-12-05 19:47:19.967539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:01.151 [2024-12-05 19:47:19.967552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.282 ms 00:29:01.151 [2024-12-05 19:47:19.967561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.151 [2024-12-05 19:47:19.991750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.151 [2024-12-05 19:47:19.991808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:01.151 [2024-12-05 19:47:19.991822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.112 ms 00:29:01.151 [2024-12-05 19:47:19.991831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.151 [2024-12-05 19:47:19.991881] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:01.151 [2024-12-05 19:47:19.991896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 109568 / 261120 wr_cnt: 1 state: open 00:29:01.152 [2024-12-05 19:47:19.991908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.991916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.991925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.991932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.991940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.991948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.991956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.991963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.991971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
10: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.991978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.991985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.991993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992181] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992374] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:01.152 [2024-12-05 19:47:19.992420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:01.153 [2024-12-05 19:47:19.992428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:01.153 [2024-12-05 19:47:19.992435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:01.153 [2024-12-05 19:47:19.992443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:01.153 [2024-12-05 19:47:19.992450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:01.153 [2024-12-05 19:47:19.992458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:01.153 [2024-12-05 19:47:19.992466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:01.153 [2024-12-05 19:47:19.992473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:01.153 [2024-12-05 19:47:19.992480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:01.153 [2024-12-05 19:47:19.992488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:01.153 [2024-12-05 19:47:19.992496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:01.153 [2024-12-05 19:47:19.992503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:01.153 [2024-12-05 19:47:19.992517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:01.153 [2024-12-05 19:47:19.992524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:01.153 [2024-12-05 19:47:19.992532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:01.153 [2024-12-05 19:47:19.992539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:01.153 [2024-12-05 19:47:19.992546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:01.153 [2024-12-05 19:47:19.992554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:01.153 [2024-12-05 19:47:19.992561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:01.153 [2024-12-05 
19:47:19.992568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:29:01.153 [2024-12-05 19:47:19.992576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:29:01.153 [2024-12-05 19:47:19.992583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:29:01.153 [2024-12-05 19:47:19.992590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:29:01.153 [2024-12-05 19:47:19.992597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:29:01.153 [2024-12-05 19:47:19.992604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:29:01.153 [2024-12-05 19:47:19.992611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:29:01.153 [2024-12-05 19:47:19.992619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:29:01.153 [2024-12-05 19:47:19.992626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:29:01.153 [2024-12-05 19:47:19.992635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:29:01.153 [2024-12-05 19:47:19.992643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:29:01.153 [2024-12-05 19:47:19.992651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:29:01.153 [2024-12-05 19:47:19.992659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:29:01.153 [2024-12-05 19:47:19.992666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:29:01.153 [2024-12-05 19:47:19.992674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:29:01.153 [2024-12-05 19:47:19.992682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:29:01.153 [2024-12-05 19:47:19.992698] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:29:01.153 [2024-12-05 19:47:19.992706] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a6f49eaf-f4d0-4462-bb83-644057542bcc
00:29:01.153 [2024-12-05 19:47:19.992727] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 109568
00:29:01.153 [2024-12-05 19:47:19.992735] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 110528
00:29:01.153 [2024-12-05 19:47:19.992742] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 109568
00:29:01.153 [2024-12-05 19:47:19.992751] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0088
00:29:01.153 [2024-12-05 19:47:19.992758] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:29:01.153 [2024-12-05 19:47:19.992766] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:29:01.153 [2024-12-05 19:47:19.992774] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:29:01.153 [2024-12-05 19:47:19.992780] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:29:01.153 [2024-12-05 19:47:19.992787] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
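The WAF figure in the statistics dump above is simply the ratio of the two counters beside it: 110528 total media writes against 109568 user writes, the 960-write difference being FTL-internal traffic such as metadata updates. A minimal illustrative check of that arithmetic in Python (not part of the test suite):

    # Reproduce the write-amplification factor from the ftl_dev_dump_stats
    # counters above. WAF = media writes / host writes; anything above 1.0
    # is data the FTL wrote on its own behalf.
    total_writes = 110528  # "total writes" from the dump
    user_writes = 109568   # "user writes" from the dump
    waf = total_writes / user_writes
    print(f"WAF: {waf:.4f}")  # -> WAF: 1.0088, matching the log line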
00:29:01.153 [2024-12-05 19:47:19.992795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.153 [2024-12-05 19:47:19.992803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:01.153 [2024-12-05 19:47:19.992811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.915 ms 00:29:01.153 [2024-12-05 19:47:19.992818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.153 [2024-12-05 19:47:20.005861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.153 [2024-12-05 19:47:20.006052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:01.153 [2024-12-05 19:47:20.006072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.022 ms 00:29:01.153 [2024-12-05 19:47:20.006080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.153 [2024-12-05 19:47:20.006492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:01.153 [2024-12-05 19:47:20.006510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:01.153 [2024-12-05 19:47:20.006526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:29:01.153 [2024-12-05 19:47:20.006534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.153 [2024-12-05 19:47:20.041141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:01.153 [2024-12-05 19:47:20.041194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:01.153 [2024-12-05 19:47:20.041207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:01.153 [2024-12-05 19:47:20.041216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.153 [2024-12-05 19:47:20.041287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:01.153 [2024-12-05 19:47:20.041295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:01.153 [2024-12-05 19:47:20.041310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:01.153 [2024-12-05 19:47:20.041317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.153 [2024-12-05 19:47:20.041384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:01.153 [2024-12-05 19:47:20.041394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:01.153 [2024-12-05 19:47:20.041402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:01.153 [2024-12-05 19:47:20.041410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.153 [2024-12-05 19:47:20.041426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:01.153 [2024-12-05 19:47:20.041434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:01.153 [2024-12-05 19:47:20.041442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:01.153 [2024-12-05 19:47:20.041449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.153 [2024-12-05 19:47:20.123764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:01.153 [2024-12-05 19:47:20.123829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:01.154 [2024-12-05 19:47:20.123845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:01.154 [2024-12-05 19:47:20.123854] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.414 [2024-12-05 19:47:20.190518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:01.414 [2024-12-05 19:47:20.190580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:01.414 [2024-12-05 19:47:20.190593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:01.414 [2024-12-05 19:47:20.190607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.414 [2024-12-05 19:47:20.190685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:01.414 [2024-12-05 19:47:20.190695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:01.414 [2024-12-05 19:47:20.190703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:01.414 [2024-12-05 19:47:20.190711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.414 [2024-12-05 19:47:20.190748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:01.414 [2024-12-05 19:47:20.190757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:01.414 [2024-12-05 19:47:20.190765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:01.414 [2024-12-05 19:47:20.190773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.414 [2024-12-05 19:47:20.190867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:01.414 [2024-12-05 19:47:20.190877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:01.414 [2024-12-05 19:47:20.190886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:01.414 [2024-12-05 19:47:20.190893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.414 [2024-12-05 19:47:20.190922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:01.414 [2024-12-05 19:47:20.190930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:01.414 [2024-12-05 19:47:20.190939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:01.414 [2024-12-05 19:47:20.190946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.414 [2024-12-05 19:47:20.190987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:01.414 [2024-12-05 19:47:20.190996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:01.414 [2024-12-05 19:47:20.191005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:01.414 [2024-12-05 19:47:20.191013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.414 [2024-12-05 19:47:20.191057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:01.414 [2024-12-05 19:47:20.191067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:01.414 [2024-12-05 19:47:20.191076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:01.414 [2024-12-05 19:47:20.191083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:01.414 [2024-12-05 19:47:20.191244] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 714.629 ms, result 0 00:29:03.325 00:29:03.325 00:29:03.325 19:47:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:05.309 19:47:24 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:05.568 [2024-12-05 19:47:24.363514] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:29:05.568 [2024-12-05 19:47:24.363787] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80352 ] 00:29:05.568 [2024-12-05 19:47:24.521142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.828 [2024-12-05 19:47:24.619206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.091 [2024-12-05 19:47:24.875704] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:06.091 [2024-12-05 19:47:24.875771] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:06.091 [2024-12-05 19:47:25.028400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.091 [2024-12-05 19:47:25.028451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:06.091 [2024-12-05 19:47:25.028464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:06.091 [2024-12-05 19:47:25.028472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.091 [2024-12-05 19:47:25.028519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.091 [2024-12-05 19:47:25.028531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:06.091 [2024-12-05 19:47:25.028540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:29:06.091 [2024-12-05 19:47:25.028547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.091 [2024-12-05 19:47:25.028566] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:06.091 [2024-12-05 19:47:25.029250] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:06.091 [2024-12-05 19:47:25.029266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.091 [2024-12-05 19:47:25.029274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:06.091 [2024-12-05 19:47:25.029283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:29:06.091 [2024-12-05 19:47:25.029289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.091 [2024-12-05 19:47:25.030350] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:06.091 [2024-12-05 19:47:25.042444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.091 [2024-12-05 19:47:25.042479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:06.091 [2024-12-05 19:47:25.042490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.096 ms 00:29:06.091 [2024-12-05 19:47:25.042499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.091 [2024-12-05 19:47:25.042555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.091 [2024-12-05 
19:47:25.042564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:06.091 [2024-12-05 19:47:25.042572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:29:06.091 [2024-12-05 19:47:25.042579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.091 [2024-12-05 19:47:25.047380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.091 [2024-12-05 19:47:25.047410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:06.091 [2024-12-05 19:47:25.047419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.743 ms 00:29:06.091 [2024-12-05 19:47:25.047431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.091 [2024-12-05 19:47:25.047502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.091 [2024-12-05 19:47:25.047511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:06.091 [2024-12-05 19:47:25.047519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:29:06.091 [2024-12-05 19:47:25.047526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.091 [2024-12-05 19:47:25.047564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.091 [2024-12-05 19:47:25.047573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:06.091 [2024-12-05 19:47:25.047580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:06.091 [2024-12-05 19:47:25.047588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.091 [2024-12-05 19:47:25.047611] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:06.091 [2024-12-05 19:47:25.050960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.091 [2024-12-05 19:47:25.050985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:06.091 [2024-12-05 19:47:25.050996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.354 ms 00:29:06.091 [2024-12-05 19:47:25.051004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.091 [2024-12-05 19:47:25.051033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.091 [2024-12-05 19:47:25.051041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:06.091 [2024-12-05 19:47:25.051049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:06.092 [2024-12-05 19:47:25.051056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.092 [2024-12-05 19:47:25.051075] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:06.092 [2024-12-05 19:47:25.051093] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:06.092 [2024-12-05 19:47:25.051142] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:06.092 [2024-12-05 19:47:25.051160] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:06.092 [2024-12-05 19:47:25.051263] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:06.092 [2024-12-05 19:47:25.051273] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:06.092 [2024-12-05 19:47:25.051283] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:06.092 [2024-12-05 19:47:25.051293] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:06.092 [2024-12-05 19:47:25.051302] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:06.092 [2024-12-05 19:47:25.051310] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:06.092 [2024-12-05 19:47:25.051318] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:06.092 [2024-12-05 19:47:25.051327] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:06.092 [2024-12-05 19:47:25.051334] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:06.092 [2024-12-05 19:47:25.051342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.092 [2024-12-05 19:47:25.051349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:06.092 [2024-12-05 19:47:25.051357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:29:06.092 [2024-12-05 19:47:25.051364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.092 [2024-12-05 19:47:25.051445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.092 [2024-12-05 19:47:25.051454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:06.092 [2024-12-05 19:47:25.051461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:29:06.092 [2024-12-05 19:47:25.051468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.092 [2024-12-05 19:47:25.051580] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:06.092 [2024-12-05 19:47:25.051590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:06.092 [2024-12-05 19:47:25.051598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:06.092 [2024-12-05 19:47:25.051605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.092 [2024-12-05 19:47:25.051613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:06.092 [2024-12-05 19:47:25.051619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:06.092 [2024-12-05 19:47:25.051626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:06.092 [2024-12-05 19:47:25.051632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:06.092 [2024-12-05 19:47:25.051639] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:06.092 [2024-12-05 19:47:25.051645] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:06.092 [2024-12-05 19:47:25.051652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:06.092 [2024-12-05 19:47:25.051659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:06.092 [2024-12-05 19:47:25.051665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:06.092 [2024-12-05 19:47:25.051676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:06.092 [2024-12-05 19:47:25.051683] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl0] offset: 113.88 MiB 00:29:06.092 [2024-12-05 19:47:25.051690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.092 [2024-12-05 19:47:25.051696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:06.092 [2024-12-05 19:47:25.051704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:06.092 [2024-12-05 19:47:25.051714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.092 [2024-12-05 19:47:25.051725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:06.092 [2024-12-05 19:47:25.051737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:06.092 [2024-12-05 19:47:25.051747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:06.092 [2024-12-05 19:47:25.051754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:06.092 [2024-12-05 19:47:25.051760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:06.092 [2024-12-05 19:47:25.051767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:06.092 [2024-12-05 19:47:25.051774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:06.092 [2024-12-05 19:47:25.051780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:06.092 [2024-12-05 19:47:25.051787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:06.092 [2024-12-05 19:47:25.051793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:06.092 [2024-12-05 19:47:25.051799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:06.092 [2024-12-05 19:47:25.051806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:06.092 [2024-12-05 19:47:25.051812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:06.092 [2024-12-05 19:47:25.051819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:06.092 [2024-12-05 19:47:25.051825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:06.092 [2024-12-05 19:47:25.051831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:06.092 [2024-12-05 19:47:25.051837] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:06.092 [2024-12-05 19:47:25.051844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:06.092 [2024-12-05 19:47:25.051850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:06.092 [2024-12-05 19:47:25.051857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:06.092 [2024-12-05 19:47:25.051864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.092 [2024-12-05 19:47:25.051870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:06.092 [2024-12-05 19:47:25.051877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:06.092 [2024-12-05 19:47:25.051883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.092 [2024-12-05 19:47:25.051890] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:06.092 [2024-12-05 19:47:25.051897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:06.092 [2024-12-05 19:47:25.051904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:06.092 [2024-12-05 
19:47:25.051914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:06.092 [2024-12-05 19:47:25.051926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:06.092 [2024-12-05 19:47:25.051937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:06.092 [2024-12-05 19:47:25.051950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:06.092 [2024-12-05 19:47:25.051961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:06.092 [2024-12-05 19:47:25.051973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:06.092 [2024-12-05 19:47:25.051980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:06.092 [2024-12-05 19:47:25.051988] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:06.092 [2024-12-05 19:47:25.051997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:06.092 [2024-12-05 19:47:25.052009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:06.092 [2024-12-05 19:47:25.052017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:06.092 [2024-12-05 19:47:25.052024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:06.092 [2024-12-05 19:47:25.052031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:06.093 [2024-12-05 19:47:25.052039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:06.093 [2024-12-05 19:47:25.052045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:06.093 [2024-12-05 19:47:25.052052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:06.093 [2024-12-05 19:47:25.052059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:06.093 [2024-12-05 19:47:25.052066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:06.093 [2024-12-05 19:47:25.052073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:06.093 [2024-12-05 19:47:25.052080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:06.093 [2024-12-05 19:47:25.052087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:06.093 [2024-12-05 19:47:25.052093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:06.093 [2024-12-05 19:47:25.052100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:06.093 [2024-12-05 
19:47:25.052107] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:06.093 [2024-12-05 19:47:25.052115] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:06.093 [2024-12-05 19:47:25.052122] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:06.093 [2024-12-05 19:47:25.052143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:06.093 [2024-12-05 19:47:25.052150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:06.093 [2024-12-05 19:47:25.052157] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:06.093 [2024-12-05 19:47:25.052164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.093 [2024-12-05 19:47:25.052172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:06.093 [2024-12-05 19:47:25.052180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:29:06.093 [2024-12-05 19:47:25.052187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.093 [2024-12-05 19:47:25.077641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.093 [2024-12-05 19:47:25.077674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:06.093 [2024-12-05 19:47:25.077686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.406 ms 00:29:06.093 [2024-12-05 19:47:25.077696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.093 [2024-12-05 19:47:25.077783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.093 [2024-12-05 19:47:25.077796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:06.093 [2024-12-05 19:47:25.077808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:29:06.093 [2024-12-05 19:47:25.077818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.354 [2024-12-05 19:47:25.124798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.354 [2024-12-05 19:47:25.124838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:06.354 [2024-12-05 19:47:25.124851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.914 ms 00:29:06.354 [2024-12-05 19:47:25.124859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.354 [2024-12-05 19:47:25.124904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.354 [2024-12-05 19:47:25.124915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:06.354 [2024-12-05 19:47:25.124926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:06.354 [2024-12-05 19:47:25.124934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.354 [2024-12-05 19:47:25.125306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.354 [2024-12-05 19:47:25.125323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:06.354 [2024-12-05 19:47:25.125331] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:29:06.354 [2024-12-05 19:47:25.125339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.354 [2024-12-05 19:47:25.125460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.354 [2024-12-05 19:47:25.125474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:06.354 [2024-12-05 19:47:25.125486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:29:06.354 [2024-12-05 19:47:25.125493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.354 [2024-12-05 19:47:25.138294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.354 [2024-12-05 19:47:25.138325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:06.354 [2024-12-05 19:47:25.138334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.784 ms 00:29:06.354 [2024-12-05 19:47:25.138342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.354 [2024-12-05 19:47:25.150308] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:29:06.354 [2024-12-05 19:47:25.150341] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:06.354 [2024-12-05 19:47:25.150353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.354 [2024-12-05 19:47:25.150362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:06.354 [2024-12-05 19:47:25.150371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.920 ms 00:29:06.354 [2024-12-05 19:47:25.150379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.354 [2024-12-05 19:47:25.174479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.354 [2024-12-05 19:47:25.174514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:06.354 [2024-12-05 19:47:25.174527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.062 ms 00:29:06.354 [2024-12-05 19:47:25.174536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.354 [2024-12-05 19:47:25.186114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.354 [2024-12-05 19:47:25.186152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:06.354 [2024-12-05 19:47:25.186162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.555 ms 00:29:06.354 [2024-12-05 19:47:25.186169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.354 [2024-12-05 19:47:25.197291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.354 [2024-12-05 19:47:25.197412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:06.354 [2024-12-05 19:47:25.197427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.092 ms 00:29:06.354 [2024-12-05 19:47:25.197435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.354 [2024-12-05 19:47:25.198050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.354 [2024-12-05 19:47:25.198072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:06.354 [2024-12-05 19:47:25.198084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.539 ms 00:29:06.354 [2024-12-05 19:47:25.198091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.354 [2024-12-05 19:47:25.254172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.354 [2024-12-05 19:47:25.254225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:06.354 [2024-12-05 19:47:25.254244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.062 ms 00:29:06.354 [2024-12-05 19:47:25.254252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.354 [2024-12-05 19:47:25.264309] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:06.354 [2024-12-05 19:47:25.266731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.354 [2024-12-05 19:47:25.266759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:06.354 [2024-12-05 19:47:25.266772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.428 ms 00:29:06.354 [2024-12-05 19:47:25.266782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.354 [2024-12-05 19:47:25.266877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.354 [2024-12-05 19:47:25.266888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:06.354 [2024-12-05 19:47:25.266899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:06.354 [2024-12-05 19:47:25.266906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.354 [2024-12-05 19:47:25.268190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.354 [2024-12-05 19:47:25.268218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:06.354 [2024-12-05 19:47:25.268228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.246 ms 00:29:06.354 [2024-12-05 19:47:25.268235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.354 [2024-12-05 19:47:25.268257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.354 [2024-12-05 19:47:25.268265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:06.354 [2024-12-05 19:47:25.268273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:06.354 [2024-12-05 19:47:25.268280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.354 [2024-12-05 19:47:25.268314] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:06.354 [2024-12-05 19:47:25.268323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.354 [2024-12-05 19:47:25.268331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:06.354 [2024-12-05 19:47:25.268338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:06.354 [2024-12-05 19:47:25.268345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.354 [2024-12-05 19:47:25.291097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.354 [2024-12-05 19:47:25.291145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:06.354 [2024-12-05 19:47:25.291160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.735 ms 00:29:06.354 [2024-12-05 19:47:25.291169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
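At this point ftl0 has been brought back up from the unclean shutdown (the "Set FTL dirty state" step above; the startup finishes with result 0 just below), and the read-back that follows feeds the checksum comparison the script set up earlier with md5sum and spdk_dd. A hedged sketch of that style of integrity check in Python; the file paths and the md5_of helper are hypothetical stand-ins, not code from dirty_shutdown.sh:

    import hashlib

    def md5_of(path, chunk_size=1 << 20):
        # Stream the file through MD5, equivalent to the md5sum calls in the log.
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk_size), b""):
                digest.update(block)
        return digest.hexdigest()

    # Hypothetical paths: data hashed before the dirty shutdown, and the file
    # spdk_dd produced by re-reading ftl0 after recovery. Matching digests
    # mean no acknowledged data was lost across the unclean shutdown.
    before = md5_of("/tmp/testfile.before-shutdown")
    after = md5_of("/tmp/testfile.readback")
    assert before == after, "data mismatch after dirty-shutdown recovery"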
00:29:06.354 [2024-12-05 19:47:25.291242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:06.354 [2024-12-05 19:47:25.291251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:06.354 [2024-12-05 19:47:25.291260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:29:06.354 [2024-12-05 19:47:25.291267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:06.354 [2024-12-05 19:47:25.292146] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 263.331 ms, result 0 00:29:07.740  [2024-12-05T19:47:48.315Z] Copying: 1024/1024 [MB] (average 46 MBps)[2024-12-05 19:47:48.063911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.309 [2024-12-05 19:47:48.063989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:29.309 [2024-12-05 19:47:48.064008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:29.309 [2024-12-05 19:47:48.064020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.309 [2024-12-05 19:47:48.064050] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:29.309 [2024-12-05 19:47:48.067552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.309 [2024-12-05 19:47:48.067587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:29.309 [2024-12-05 19:47:48.067597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.482 ms 00:29:29.309 [2024-12-05 19:47:48.067605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.309 [2024-12-05 19:47:48.067821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.309 [2024-12-05 19:47:48.067835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:29.309 [2024-12-05 19:47:48.067844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:29:29.309 [2024-12-05 19:47:48.067851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.309 
[2024-12-05 19:47:48.077443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.309 [2024-12-05 19:47:48.077480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:29.309 [2024-12-05 19:47:48.077491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.576 ms 00:29:29.309 [2024-12-05 19:47:48.077499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.309 [2024-12-05 19:47:48.084004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.309 [2024-12-05 19:47:48.084038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:29.309 [2024-12-05 19:47:48.084055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.479 ms 00:29:29.309 [2024-12-05 19:47:48.084063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.309 [2024-12-05 19:47:48.108218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.309 [2024-12-05 19:47:48.108413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:29.309 [2024-12-05 19:47:48.108432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.112 ms 00:29:29.309 [2024-12-05 19:47:48.108440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.309 [2024-12-05 19:47:48.122071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.309 [2024-12-05 19:47:48.122121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:29.309 [2024-12-05 19:47:48.122149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.589 ms 00:29:29.309 [2024-12-05 19:47:48.122157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.309 [2024-12-05 19:47:48.123971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.309 [2024-12-05 19:47:48.124113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:29.309 [2024-12-05 19:47:48.124151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.780 ms 00:29:29.309 [2024-12-05 19:47:48.124167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.309 [2024-12-05 19:47:48.147299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.309 [2024-12-05 19:47:48.147421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:29.309 [2024-12-05 19:47:48.147437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.114 ms 00:29:29.309 [2024-12-05 19:47:48.147445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.309 [2024-12-05 19:47:48.169528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.309 [2024-12-05 19:47:48.169557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:29.309 [2024-12-05 19:47:48.169567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.054 ms 00:29:29.309 [2024-12-05 19:47:48.169574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.309 [2024-12-05 19:47:48.191601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.309 [2024-12-05 19:47:48.191630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:29.309 [2024-12-05 19:47:48.191640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.997 ms 00:29:29.309 [2024-12-05 19:47:48.191647] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.309 [2024-12-05 19:47:48.213556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.309 [2024-12-05 19:47:48.213584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:29.309 [2024-12-05 19:47:48.213594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.859 ms 00:29:29.309 [2024-12-05 19:47:48.213601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.309 [2024-12-05 19:47:48.213630] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:29.309 [2024-12-05 19:47:48.213643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:29.309 [2024-12-05 19:47:48.213653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:29.309 [2024-12-05 19:47:48.213661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213796] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:29.309 [2024-12-05 19:47:48.213894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.213902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.213910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.213918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.213925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.213933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.213940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.213947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.213954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.213962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.213969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.213976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.213984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 
19:47:48.213991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.213998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 
00:29:29.310 [2024-12-05 19:47:48.214197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 
wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:29.310 [2024-12-05 19:47:48.214439] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:29.310 [2024-12-05 19:47:48.214446] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a6f49eaf-f4d0-4462-bb83-644057542bcc 00:29:29.310 [2024-12-05 19:47:48.214454] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:29.310 [2024-12-05 19:47:48.214461] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 155072 00:29:29.310 [2024-12-05 19:47:48.214471] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 153088 00:29:29.310 [2024-12-05 19:47:48.214479] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0130 00:29:29.310 [2024-12-05 19:47:48.214486] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:29.310 [2024-12-05 19:47:48.214499] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:29.310 [2024-12-05 19:47:48.214506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:29.310 [2024-12-05 19:47:48.214512] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:29.310 [2024-12-05 19:47:48.214519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:29.310 [2024-12-05 19:47:48.214526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.310 [2024-12-05 19:47:48.214534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:29.310 [2024-12-05 19:47:48.214542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.897 ms 00:29:29.310 [2024-12-05 19:47:48.214549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.310 [2024-12-05 19:47:48.226719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.310 [2024-12-05 19:47:48.226824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:29.310 [2024-12-05 19:47:48.226838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.155 ms 00:29:29.310 [2024-12-05 19:47:48.226845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.310 [2024-12-05 19:47:48.227197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.310 [2024-12-05 19:47:48.227207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:29.310 [2024-12-05 19:47:48.227216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:29:29.310 [2024-12-05 19:47:48.227223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.310 [2024-12-05 19:47:48.259350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.310 [2024-12-05 19:47:48.259380] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:29.310 [2024-12-05 19:47:48.259390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.310 [2024-12-05 19:47:48.259403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.311 [2024-12-05 19:47:48.259451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.311 [2024-12-05 19:47:48.259459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:29.311 [2024-12-05 19:47:48.259467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.311 [2024-12-05 19:47:48.259474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.311 [2024-12-05 19:47:48.259523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.311 [2024-12-05 19:47:48.259532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:29.311 [2024-12-05 19:47:48.259539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.311 [2024-12-05 19:47:48.259546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.311 [2024-12-05 19:47:48.259560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.311 [2024-12-05 19:47:48.259568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:29.311 [2024-12-05 19:47:48.259575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.311 [2024-12-05 19:47:48.259582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.574 [2024-12-05 19:47:48.336246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.574 [2024-12-05 19:47:48.336406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:29.574 [2024-12-05 19:47:48.336425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.574 [2024-12-05 19:47:48.336433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.574 [2024-12-05 19:47:48.398709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.574 [2024-12-05 19:47:48.398749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:29.574 [2024-12-05 19:47:48.398761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.574 [2024-12-05 19:47:48.398769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.574 [2024-12-05 19:47:48.398843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.574 [2024-12-05 19:47:48.398855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:29.574 [2024-12-05 19:47:48.398864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.574 [2024-12-05 19:47:48.398871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.574 [2024-12-05 19:47:48.398903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.574 [2024-12-05 19:47:48.398912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:29.574 [2024-12-05 19:47:48.398920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.574 [2024-12-05 19:47:48.398927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.574 [2024-12-05 19:47:48.399007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:29:29.574 [2024-12-05 19:47:48.399017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:29.574 [2024-12-05 19:47:48.399027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.574 [2024-12-05 19:47:48.399034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.574 [2024-12-05 19:47:48.399061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.574 [2024-12-05 19:47:48.399070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:29.574 [2024-12-05 19:47:48.399078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.574 [2024-12-05 19:47:48.399085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.574 [2024-12-05 19:47:48.399116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.574 [2024-12-05 19:47:48.399125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:29.574 [2024-12-05 19:47:48.399159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.574 [2024-12-05 19:47:48.399167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.574 [2024-12-05 19:47:48.399208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.574 [2024-12-05 19:47:48.399218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:29.574 [2024-12-05 19:47:48.399225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.574 [2024-12-05 19:47:48.399233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.574 [2024-12-05 19:47:48.399340] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 335.424 ms, result 0 00:29:30.147 00:29:30.147 00:29:30.447 19:47:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:32.364 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:32.364 19:47:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:32.364 [2024-12-05 19:47:51.366807] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
00:29:32.364 [2024-12-05 19:47:51.367059] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80633 ] 00:29:32.625 [2024-12-05 19:47:51.527241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.625 [2024-12-05 19:47:51.626680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.885 [2024-12-05 19:47:51.885165] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:32.885 [2024-12-05 19:47:51.885231] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:33.149 [2024-12-05 19:47:52.046213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.149 [2024-12-05 19:47:52.046260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:33.149 [2024-12-05 19:47:52.046273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:33.149 [2024-12-05 19:47:52.046281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.149 [2024-12-05 19:47:52.046331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.149 [2024-12-05 19:47:52.046344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:33.149 [2024-12-05 19:47:52.046353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:33.149 [2024-12-05 19:47:52.046360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.149 [2024-12-05 19:47:52.046377] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:33.149 [2024-12-05 19:47:52.047185] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:33.149 [2024-12-05 19:47:52.047249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.149 [2024-12-05 19:47:52.047258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:33.149 [2024-12-05 19:47:52.047267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.877 ms 00:29:33.149 [2024-12-05 19:47:52.047274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.149 [2024-12-05 19:47:52.048363] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:33.149 [2024-12-05 19:47:52.061632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.149 [2024-12-05 19:47:52.061667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:33.149 [2024-12-05 19:47:52.061679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.270 ms 00:29:33.149 [2024-12-05 19:47:52.061687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.149 [2024-12-05 19:47:52.061749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.149 [2024-12-05 19:47:52.061759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:33.149 [2024-12-05 19:47:52.061767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:29:33.149 [2024-12-05 19:47:52.061775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.149 [2024-12-05 19:47:52.067229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:33.149 [2024-12-05 19:47:52.067278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:33.149 [2024-12-05 19:47:52.067297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.374 ms 00:29:33.149 [2024-12-05 19:47:52.067316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.149 [2024-12-05 19:47:52.067423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.149 [2024-12-05 19:47:52.067437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:33.149 [2024-12-05 19:47:52.067451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:29:33.149 [2024-12-05 19:47:52.067463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.149 [2024-12-05 19:47:52.067515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.149 [2024-12-05 19:47:52.067530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:33.149 [2024-12-05 19:47:52.067544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:29:33.149 [2024-12-05 19:47:52.067556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.149 [2024-12-05 19:47:52.067594] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:33.149 [2024-12-05 19:47:52.073524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.149 [2024-12-05 19:47:52.073691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:33.149 [2024-12-05 19:47:52.073720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.936 ms 00:29:33.149 [2024-12-05 19:47:52.073733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.149 [2024-12-05 19:47:52.073783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.149 [2024-12-05 19:47:52.073795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:33.149 [2024-12-05 19:47:52.073808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:29:33.149 [2024-12-05 19:47:52.073819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.149 [2024-12-05 19:47:52.073910] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:33.149 [2024-12-05 19:47:52.073940] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:33.149 [2024-12-05 19:47:52.073990] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:33.149 [2024-12-05 19:47:52.074016] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:33.149 [2024-12-05 19:47:52.074189] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:33.149 [2024-12-05 19:47:52.074206] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:33.149 [2024-12-05 19:47:52.074222] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:33.149 [2024-12-05 19:47:52.074238] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:33.149 [2024-12-05 19:47:52.074252] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:33.149 [2024-12-05 19:47:52.074264] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:33.149 [2024-12-05 19:47:52.074276] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:33.149 [2024-12-05 19:47:52.074291] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:33.149 [2024-12-05 19:47:52.074303] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:33.149 [2024-12-05 19:47:52.074316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.149 [2024-12-05 19:47:52.074328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:33.149 [2024-12-05 19:47:52.074341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:29:33.149 [2024-12-05 19:47:52.074353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.149 [2024-12-05 19:47:52.074477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.149 [2024-12-05 19:47:52.074489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:33.149 [2024-12-05 19:47:52.074502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:29:33.149 [2024-12-05 19:47:52.074513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.149 [2024-12-05 19:47:52.074660] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:33.149 [2024-12-05 19:47:52.074676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:33.149 [2024-12-05 19:47:52.074689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:33.149 [2024-12-05 19:47:52.074701] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.149 [2024-12-05 19:47:52.074713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:33.149 [2024-12-05 19:47:52.074724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:33.149 [2024-12-05 19:47:52.074735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:33.149 [2024-12-05 19:47:52.074746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:33.149 [2024-12-05 19:47:52.074758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:33.149 [2024-12-05 19:47:52.074769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:33.149 [2024-12-05 19:47:52.074780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:33.149 [2024-12-05 19:47:52.074790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:33.149 [2024-12-05 19:47:52.074801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:33.149 [2024-12-05 19:47:52.074818] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:33.149 [2024-12-05 19:47:52.074830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:33.149 [2024-12-05 19:47:52.074840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.149 [2024-12-05 19:47:52.074851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:33.149 [2024-12-05 19:47:52.074862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:33.149 [2024-12-05 19:47:52.074873] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.149 [2024-12-05 19:47:52.074884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:33.149 [2024-12-05 19:47:52.074895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:33.149 [2024-12-05 19:47:52.074906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:33.149 [2024-12-05 19:47:52.074917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:33.149 [2024-12-05 19:47:52.074928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:33.149 [2024-12-05 19:47:52.074940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:33.149 [2024-12-05 19:47:52.074951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:33.150 [2024-12-05 19:47:52.074961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:33.150 [2024-12-05 19:47:52.074972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:33.150 [2024-12-05 19:47:52.074983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:33.150 [2024-12-05 19:47:52.074993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:33.150 [2024-12-05 19:47:52.075004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:33.150 [2024-12-05 19:47:52.075015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:33.150 [2024-12-05 19:47:52.075026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:33.150 [2024-12-05 19:47:52.075037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:33.150 [2024-12-05 19:47:52.075047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:33.150 [2024-12-05 19:47:52.075058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:33.150 [2024-12-05 19:47:52.075068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:33.150 [2024-12-05 19:47:52.075079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:33.150 [2024-12-05 19:47:52.075090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:33.150 [2024-12-05 19:47:52.075101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.150 [2024-12-05 19:47:52.075111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:33.150 [2024-12-05 19:47:52.075122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:33.150 [2024-12-05 19:47:52.075149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.150 [2024-12-05 19:47:52.075161] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:33.150 [2024-12-05 19:47:52.075173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:33.150 [2024-12-05 19:47:52.075185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:33.150 [2024-12-05 19:47:52.075196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:33.150 [2024-12-05 19:47:52.075209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:33.150 [2024-12-05 19:47:52.075221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:33.150 [2024-12-05 19:47:52.075231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:33.150 
[2024-12-05 19:47:52.075243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:33.150 [2024-12-05 19:47:52.075253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:33.150 [2024-12-05 19:47:52.075264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:33.150 [2024-12-05 19:47:52.075278] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:33.150 [2024-12-05 19:47:52.075292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:33.150 [2024-12-05 19:47:52.075308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:33.150 [2024-12-05 19:47:52.075322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:33.150 [2024-12-05 19:47:52.075334] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:33.150 [2024-12-05 19:47:52.075346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:33.150 [2024-12-05 19:47:52.075358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:33.150 [2024-12-05 19:47:52.075370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:33.150 [2024-12-05 19:47:52.075382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:33.150 [2024-12-05 19:47:52.075394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:33.150 [2024-12-05 19:47:52.075406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:33.150 [2024-12-05 19:47:52.075418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:33.150 [2024-12-05 19:47:52.075430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:33.150 [2024-12-05 19:47:52.075442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:33.150 [2024-12-05 19:47:52.075453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:33.150 [2024-12-05 19:47:52.075465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:33.150 [2024-12-05 19:47:52.075477] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:33.150 [2024-12-05 19:47:52.075491] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:33.150 [2024-12-05 19:47:52.075505] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:33.150 [2024-12-05 19:47:52.075517] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:33.150 [2024-12-05 19:47:52.075529] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:33.150 [2024-12-05 19:47:52.075540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:33.150 [2024-12-05 19:47:52.075553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.150 [2024-12-05 19:47:52.075564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:33.150 [2024-12-05 19:47:52.075577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms 00:29:33.150 [2024-12-05 19:47:52.075588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.150 [2024-12-05 19:47:52.114765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.150 [2024-12-05 19:47:52.114945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:33.150 [2024-12-05 19:47:52.115005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.110 ms 00:29:33.150 [2024-12-05 19:47:52.115035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.150 [2024-12-05 19:47:52.115163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.150 [2024-12-05 19:47:52.115189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:33.150 [2024-12-05 19:47:52.115213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:29:33.150 [2024-12-05 19:47:52.115234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.412 [2024-12-05 19:47:52.154002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.412 [2024-12-05 19:47:52.154208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:33.412 [2024-12-05 19:47:52.154274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.678 ms 00:29:33.412 [2024-12-05 19:47:52.154299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.412 [2024-12-05 19:47:52.154370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.412 [2024-12-05 19:47:52.154394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:33.412 [2024-12-05 19:47:52.154420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:33.412 [2024-12-05 19:47:52.154438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.412 [2024-12-05 19:47:52.154864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.412 [2024-12-05 19:47:52.154917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:33.412 [2024-12-05 19:47:52.154939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.335 ms 00:29:33.412 [2024-12-05 19:47:52.154959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.412 [2024-12-05 19:47:52.155103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.412 [2024-12-05 19:47:52.155136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:33.412 [2024-12-05 19:47:52.155163] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:29:33.412 [2024-12-05 19:47:52.155182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.412 [2024-12-05 19:47:52.169955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.412 [2024-12-05 19:47:52.170118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:33.412 [2024-12-05 19:47:52.170188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.698 ms 00:29:33.412 [2024-12-05 19:47:52.170212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.412 [2024-12-05 19:47:52.183388] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:33.412 [2024-12-05 19:47:52.183595] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:33.412 [2024-12-05 19:47:52.183765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.412 [2024-12-05 19:47:52.183803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:33.412 [2024-12-05 19:47:52.183842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.410 ms 00:29:33.412 [2024-12-05 19:47:52.183918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.412 [2024-12-05 19:47:52.209099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.412 [2024-12-05 19:47:52.209253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:33.412 [2024-12-05 19:47:52.209306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.107 ms 00:29:33.412 [2024-12-05 19:47:52.209329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.412 [2024-12-05 19:47:52.221650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.412 [2024-12-05 19:47:52.221771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:33.412 [2024-12-05 19:47:52.221819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.262 ms 00:29:33.412 [2024-12-05 19:47:52.221850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.412 [2024-12-05 19:47:52.233325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.412 [2024-12-05 19:47:52.233436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:33.412 [2024-12-05 19:47:52.233485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.434 ms 00:29:33.412 [2024-12-05 19:47:52.233506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.412 [2024-12-05 19:47:52.234145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.412 [2024-12-05 19:47:52.234234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:33.412 [2024-12-05 19:47:52.234302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:29:33.412 [2024-12-05 19:47:52.234325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.412 [2024-12-05 19:47:52.290939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.412 [2024-12-05 19:47:52.291109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:33.412 [2024-12-05 19:47:52.291194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 56.581 ms 00:29:33.412 [2024-12-05 19:47:52.291218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.412 [2024-12-05 19:47:52.301624] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:33.412 [2024-12-05 19:47:52.304367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.412 [2024-12-05 19:47:52.304472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:33.412 [2024-12-05 19:47:52.304525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.099 ms 00:29:33.412 [2024-12-05 19:47:52.304546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.412 [2024-12-05 19:47:52.304656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.412 [2024-12-05 19:47:52.304685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:33.412 [2024-12-05 19:47:52.304707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:29:33.412 [2024-12-05 19:47:52.304758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.412 [2024-12-05 19:47:52.305369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.412 [2024-12-05 19:47:52.305466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:33.412 [2024-12-05 19:47:52.305515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:29:33.412 [2024-12-05 19:47:52.305536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.412 [2024-12-05 19:47:52.305577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.412 [2024-12-05 19:47:52.305682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:33.412 [2024-12-05 19:47:52.305706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:29:33.412 [2024-12-05 19:47:52.305725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.412 [2024-12-05 19:47:52.305772] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:33.412 [2024-12-05 19:47:52.305839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.412 [2024-12-05 19:47:52.305862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:33.412 [2024-12-05 19:47:52.305881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:29:33.412 [2024-12-05 19:47:52.305899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.412 [2024-12-05 19:47:52.329457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.412 [2024-12-05 19:47:52.329586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:33.412 [2024-12-05 19:47:52.329643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.525 ms 00:29:33.412 [2024-12-05 19:47:52.329665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:33.412 [2024-12-05 19:47:52.329786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:33.412 [2024-12-05 19:47:52.329839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:33.412 [2024-12-05 19:47:52.329859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:29:33.412 [2024-12-05 19:47:52.329878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:29:33.412 [2024-12-05 19:47:52.330811] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 284.196 ms, result 0 00:29:34.797  [2024-12-05T19:47:54.748Z] Copying: 15/1024 [MB] (15 MBps) [... intermediate progress-meter updates elided ...] [2024-12-05T19:48:35.103Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-05 19:48:35.077026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.097 [2024-12-05 19:48:35.077328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:16.097 [2024-12-05 19:48:35.077357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:16.097 [2024-12-05 19:48:35.077366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.097 [2024-12-05 19:48:35.077401] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:16.097 [2024-12-05 19:48:35.081110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.097 [2024-12-05
19:48:35.081175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:16.097 [2024-12-05 19:48:35.081190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.690 ms 00:30:16.097 [2024-12-05 19:48:35.081200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.097 [2024-12-05 19:48:35.081961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.097 [2024-12-05 19:48:35.081989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:16.097 [2024-12-05 19:48:35.082002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:30:16.097 [2024-12-05 19:48:35.082013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.097 [2024-12-05 19:48:35.087448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.097 [2024-12-05 19:48:35.087585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:16.097 [2024-12-05 19:48:35.087657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.413 ms 00:30:16.097 [2024-12-05 19:48:35.087680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.097 [2024-12-05 19:48:35.094481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.097 [2024-12-05 19:48:35.094523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:16.097 [2024-12-05 19:48:35.094535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.771 ms 00:30:16.097 [2024-12-05 19:48:35.094543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.358 [2024-12-05 19:48:35.121954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.358 [2024-12-05 19:48:35.122005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:16.358 [2024-12-05 19:48:35.122019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.338 ms 00:30:16.358 [2024-12-05 19:48:35.122027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.358 [2024-12-05 19:48:35.137690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.358 [2024-12-05 19:48:35.137740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:16.358 [2024-12-05 19:48:35.137754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.614 ms 00:30:16.358 [2024-12-05 19:48:35.137765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.358 [2024-12-05 19:48:35.143012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.358 [2024-12-05 19:48:35.143060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:16.358 [2024-12-05 19:48:35.143072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.169 ms 00:30:16.358 [2024-12-05 19:48:35.143080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.358 [2024-12-05 19:48:35.168667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.358 [2024-12-05 19:48:35.168879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:16.358 [2024-12-05 19:48:35.168901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.570 ms 00:30:16.358 [2024-12-05 19:48:35.168911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.358 [2024-12-05 19:48:35.194427] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.358 [2024-12-05 19:48:35.194480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:16.358 [2024-12-05 19:48:35.194494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.475 ms 00:30:16.358 [2024-12-05 19:48:35.194503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.358 [2024-12-05 19:48:35.219431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.358 [2024-12-05 19:48:35.219616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:16.358 [2024-12-05 19:48:35.219637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.878 ms 00:30:16.358 [2024-12-05 19:48:35.219645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.358 [2024-12-05 19:48:35.244454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.358 [2024-12-05 19:48:35.244500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:16.358 [2024-12-05 19:48:35.244514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.725 ms 00:30:16.358 [2024-12-05 19:48:35.244522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.358 [2024-12-05 19:48:35.244566] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:16.358 [2024-12-05 19:48:35.244590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:16.358 [2024-12-05 19:48:35.244605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:30:16.358 [2024-12-05 19:48:35.244614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:16.358 [2024-12-05 19:48:35.244624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:16.358 [2024-12-05 19:48:35.244633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:16.358 [2024-12-05 19:48:35.244642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:16.358 [2024-12-05 19:48:35.244650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:16.358 [2024-12-05 19:48:35.244658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:16.358 [2024-12-05 19:48:35.244666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:16.358 [2024-12-05 19:48:35.244674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:16.358 [2024-12-05 19:48:35.244682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:16.358 [2024-12-05 19:48:35.244690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:16.358 [2024-12-05 19:48:35.244698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:16.358 [2024-12-05 19:48:35.244706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:16.358 [2024-12-05 19:48:35.244714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: 
free 00:30:16.358 [2024-12-05 19:48:35.244722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:16.358 [2024-12-05 19:48:35.244730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:16.358 [2024-12-05 19:48:35.244738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:16.358 [2024-12-05 19:48:35.244745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:16.358 [2024-12-05 19:48:35.244753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:16.358 [2024-12-05 19:48:35.244762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 
261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.244995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245325] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:16.359 [2024-12-05 19:48:35.245429] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:16.359 [2024-12-05 19:48:35.245438] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a6f49eaf-f4d0-4462-bb83-644057542bcc 00:30:16.359 [2024-12-05 19:48:35.245446] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:30:16.359 [2024-12-05 19:48:35.245454] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:16.359 [2024-12-05 19:48:35.245462] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:16.359 [2024-12-05 19:48:35.245484] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:16.359 [2024-12-05 19:48:35.245499] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:16.359 [2024-12-05 19:48:35.245508] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:16.359 [2024-12-05 19:48:35.245516] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:16.359 [2024-12-05 19:48:35.245523] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:16.359 [2024-12-05 19:48:35.245529] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:16.359 [2024-12-05 19:48:35.245537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.359 [2024-12-05 19:48:35.245545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:16.359 [2024-12-05 19:48:35.245555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:30:16.359 [2024-12-05 19:48:35.245564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.359 [2024-12-05 19:48:35.259168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.359 [2024-12-05 19:48:35.259211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:16.359 [2024-12-05 19:48:35.259224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 13.583 ms 00:30:16.359 [2024-12-05 19:48:35.259233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.359 [2024-12-05 19:48:35.259639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:16.359 [2024-12-05 19:48:35.259662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:16.359 [2024-12-05 19:48:35.259672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.366 ms 00:30:16.359 [2024-12-05 19:48:35.259680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.359 [2024-12-05 19:48:35.295770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:16.359 [2024-12-05 19:48:35.295843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:16.359 [2024-12-05 19:48:35.295858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:16.359 [2024-12-05 19:48:35.295867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.359 [2024-12-05 19:48:35.295954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:16.359 [2024-12-05 19:48:35.295970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:16.359 [2024-12-05 19:48:35.295979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:16.359 [2024-12-05 19:48:35.295987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.359 [2024-12-05 19:48:35.296092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:16.360 [2024-12-05 19:48:35.296105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:16.360 [2024-12-05 19:48:35.296113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:16.360 [2024-12-05 19:48:35.296121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.360 [2024-12-05 19:48:35.296165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:16.360 [2024-12-05 19:48:35.296175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:16.360 [2024-12-05 19:48:35.296188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:16.360 [2024-12-05 19:48:35.296197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.621 [2024-12-05 19:48:35.380834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:16.621 [2024-12-05 19:48:35.380899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:16.621 [2024-12-05 19:48:35.380913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:16.621 [2024-12-05 19:48:35.380921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.621 [2024-12-05 19:48:35.447301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:16.621 [2024-12-05 19:48:35.447371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:16.621 [2024-12-05 19:48:35.447383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:16.621 [2024-12-05 19:48:35.447392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.621 [2024-12-05 19:48:35.447451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:16.621 [2024-12-05 19:48:35.447461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:16.621 
[2024-12-05 19:48:35.447469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:16.621 [2024-12-05 19:48:35.447477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.621 [2024-12-05 19:48:35.447530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:16.621 [2024-12-05 19:48:35.447540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:16.621 [2024-12-05 19:48:35.447549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:16.621 [2024-12-05 19:48:35.447560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.621 [2024-12-05 19:48:35.447657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:16.621 [2024-12-05 19:48:35.447668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:16.621 [2024-12-05 19:48:35.447676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:16.621 [2024-12-05 19:48:35.447684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.621 [2024-12-05 19:48:35.447714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:16.621 [2024-12-05 19:48:35.447723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:16.621 [2024-12-05 19:48:35.447731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:16.621 [2024-12-05 19:48:35.447739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.621 [2024-12-05 19:48:35.447778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:16.621 [2024-12-05 19:48:35.447788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:16.621 [2024-12-05 19:48:35.447797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:16.621 [2024-12-05 19:48:35.447805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.621 [2024-12-05 19:48:35.447846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:16.621 [2024-12-05 19:48:35.447856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:16.621 [2024-12-05 19:48:35.447864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:16.621 [2024-12-05 19:48:35.447875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:16.621 [2024-12-05 19:48:35.447996] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 370.943 ms, result 0 00:30:18.005 00:30:18.005 00:30:18.005 19:48:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:19.908 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:30:19.908 19:48:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:30:19.908 19:48:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:30:19.908 19:48:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:19.908 19:48:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:20.166 19:48:38 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:30:20.166 19:48:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:20.166 19:48:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:20.166 Process with pid 79304 is not found 00:30:20.166 19:48:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 79304 00:30:20.166 19:48:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 79304 ']' 00:30:20.166 19:48:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 79304 00:30:20.166 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79304) - No such process 00:30:20.166 19:48:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 79304 is not found' 00:30:20.166 19:48:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:30:20.424 Remove shared memory files 00:30:20.424 19:48:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:30:20.424 19:48:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:20.424 19:48:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:20.424 19:48:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:20.424 19:48:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:30:20.424 19:48:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:20.424 19:48:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:20.424 ************************************ 00:30:20.424 END TEST ftl_dirty_shutdown 00:30:20.424 ************************************ 00:30:20.424 00:30:20.424 real 2m51.762s 00:30:20.424 user 3m13.120s 00:30:20.424 sys 0m25.540s 00:30:20.424 19:48:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:20.424 19:48:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:20.682 19:48:39 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:20.682 19:48:39 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:20.682 19:48:39 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:20.682 19:48:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:20.682 ************************************ 00:30:20.682 START TEST ftl_upgrade_shutdown 00:30:20.682 ************************************ 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:20.683 * Looking for test storage... 
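Before ftl_upgrade_shutdown proceeds, the dirty_shutdown run above finished with the suite's usual restore_kill pattern: re-check the recovered data against the md5 recorded before the dirty shutdown, disarm the cleanup trap, remove the scratch files and the FTL JSON config, then reap the target process and its shared memory. A minimal sketch of that flow, assuming the helpers killprocess and remove_shm from the suite's common scripts (the pid variable is illustrative):

    md5sum -c "$testdir/testfile2.md5"    # recovered data must match the pre-shutdown digest
    trap - SIGINT SIGTERM EXIT            # verification passed; drop the error-cleanup trap
    rm -f "$testdir/config/ftl.json" "$testdir/testfile" "$testdir/testfile2" \
          "$testdir/testfile.md5" "$testdir/testfile2.md5"
    killprocess "$svc_pid"                # tolerates an already-dead pid, as with 79304 above
    remove_shm                            # remove leftover SPDK shared-memory files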
00:30:20.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:20.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.683 --rc genhtml_branch_coverage=1 00:30:20.683 --rc genhtml_function_coverage=1 00:30:20.683 --rc genhtml_legend=1 00:30:20.683 --rc geninfo_all_blocks=1 00:30:20.683 --rc geninfo_unexecuted_blocks=1 00:30:20.683 00:30:20.683 ' 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:20.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.683 --rc genhtml_branch_coverage=1 00:30:20.683 --rc genhtml_function_coverage=1 00:30:20.683 --rc genhtml_legend=1 00:30:20.683 --rc geninfo_all_blocks=1 00:30:20.683 --rc geninfo_unexecuted_blocks=1 00:30:20.683 00:30:20.683 ' 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:20.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.683 --rc genhtml_branch_coverage=1 00:30:20.683 --rc genhtml_function_coverage=1 00:30:20.683 --rc genhtml_legend=1 00:30:20.683 --rc geninfo_all_blocks=1 00:30:20.683 --rc geninfo_unexecuted_blocks=1 00:30:20.683 00:30:20.683 ' 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:20.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:20.683 --rc genhtml_branch_coverage=1 00:30:20.683 --rc genhtml_function_coverage=1 00:30:20.683 --rc genhtml_legend=1 00:30:20.683 --rc geninfo_all_blocks=1 00:30:20.683 --rc geninfo_unexecuted_blocks=1 00:30:20.683 00:30:20.683 ' 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:30:20.683 19:48:39 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81195 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:20.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81195 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81195 ']' 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:20.683 19:48:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:30:20.941 [2024-12-05 19:48:39.702871] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
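The exports above parameterize tcp_target_setup, which launches a fresh spdk_tgt pinned to core 0 and waits on its RPC socket; the trace that follows then assembles the FTL bdev from the two PCIe devices. Condensed into the underlying rpc.py calls visible in this trace (UUIDs elided; $rpc_py and $spdk_tgt_bin are the suite's aliases set in ftl/common.sh), the sequence is roughly:

    "$spdk_tgt_bin" --cpumask='[0]' &                                      # pid 81195 in this run
    waitforlisten $!                                                       # poll until /var/tmp/spdk.sock answers
    $rpc_py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0    # exposes basen1
    $rpc_py bdev_lvol_create_lvstore basen1 lvs                            # any stale lvstore is deleted first
    $rpc_py bdev_lvol_create basen1p0 20480 -t -u <lvs-uuid>               # thin 20480 MiB volume
    $rpc_py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0   # exposes cachen1
    $rpc_py bdev_split_create cachen1 -s 5120 1                            # carve cachen1p0 (5120 MiB)
    $rpc_py -t 60 bdev_ftl_create -b ftl -d <lvol-uuid> -c cachen1p0 --l2p_dram_limit 2

Later in the trace, get_bdev_size derives sizes from bdev_get_bdevs output by the arithmetic block_size x num_blocks: 4096 B x 1310720 blocks = 5120 MiB for basen1, and 4096 B x 5242880 blocks = 20480 MiB for the thin lvol.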
00:30:20.941 [2024-12-05 19:48:39.702994] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81195 ] 00:30:20.941 [2024-12-05 19:48:39.864702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.198 [2024-12-05 19:48:39.967197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.803 19:48:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:21.803 19:48:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:21.803 19:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:21.803 19:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:30:21.803 19:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:30:21.803 19:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:21.803 19:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:30:21.803 19:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:21.803 19:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:30:21.803 19:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:21.803 19:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:30:21.803 19:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:21.803 19:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:30:21.803 19:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:21.803 19:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:30:21.803 19:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:21.803 19:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:30:21.803 19:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:30:21.803 19:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:30:21.803 19:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:21.803 19:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:30:21.803 19:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:30:21.803 19:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:30:22.061 19:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:30:22.061 19:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:30:22.061 19:48:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:30:22.061 19:48:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:30:22.061 19:48:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:22.061 19:48:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:22.061 19:48:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:30:22.061 19:48:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:30:22.061 19:48:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:22.061 { 00:30:22.061 "name": "basen1", 00:30:22.061 "aliases": [ 00:30:22.061 "bf4bee71-e41a-403b-a8e6-324b2b64624b" 00:30:22.061 ], 00:30:22.061 "product_name": "NVMe disk", 00:30:22.061 "block_size": 4096, 00:30:22.061 "num_blocks": 1310720, 00:30:22.061 "uuid": "bf4bee71-e41a-403b-a8e6-324b2b64624b", 00:30:22.061 "numa_id": -1, 00:30:22.061 "assigned_rate_limits": { 00:30:22.061 "rw_ios_per_sec": 0, 00:30:22.061 "rw_mbytes_per_sec": 0, 00:30:22.061 "r_mbytes_per_sec": 0, 00:30:22.061 "w_mbytes_per_sec": 0 00:30:22.061 }, 00:30:22.061 "claimed": true, 00:30:22.061 "claim_type": "read_many_write_one", 00:30:22.061 "zoned": false, 00:30:22.061 "supported_io_types": { 00:30:22.061 "read": true, 00:30:22.061 "write": true, 00:30:22.061 "unmap": true, 00:30:22.061 "flush": true, 00:30:22.061 "reset": true, 00:30:22.061 "nvme_admin": true, 00:30:22.061 "nvme_io": true, 00:30:22.061 "nvme_io_md": false, 00:30:22.061 "write_zeroes": true, 00:30:22.061 "zcopy": false, 00:30:22.061 "get_zone_info": false, 00:30:22.061 "zone_management": false, 00:30:22.061 "zone_append": false, 00:30:22.061 "compare": true, 00:30:22.061 "compare_and_write": false, 00:30:22.061 "abort": true, 00:30:22.061 "seek_hole": false, 00:30:22.061 "seek_data": false, 00:30:22.061 "copy": true, 00:30:22.061 "nvme_iov_md": false 00:30:22.061 }, 00:30:22.061 "driver_specific": { 00:30:22.061 "nvme": [ 00:30:22.061 { 00:30:22.061 "pci_address": "0000:00:11.0", 00:30:22.061 "trid": { 00:30:22.061 "trtype": "PCIe", 00:30:22.061 "traddr": "0000:00:11.0" 00:30:22.061 }, 00:30:22.061 "ctrlr_data": { 00:30:22.061 "cntlid": 0, 00:30:22.061 "vendor_id": "0x1b36", 00:30:22.061 "model_number": "QEMU NVMe Ctrl", 00:30:22.061 "serial_number": "12341", 00:30:22.061 "firmware_revision": "8.0.0", 00:30:22.061 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:22.061 "oacs": { 00:30:22.061 "security": 0, 00:30:22.061 "format": 1, 00:30:22.061 "firmware": 0, 00:30:22.061 "ns_manage": 1 00:30:22.061 }, 00:30:22.061 "multi_ctrlr": false, 00:30:22.061 "ana_reporting": false 00:30:22.061 }, 00:30:22.061 "vs": { 00:30:22.061 "nvme_version": "1.4" 00:30:22.061 }, 00:30:22.061 "ns_data": { 00:30:22.061 "id": 1, 00:30:22.061 "can_share": false 00:30:22.061 } 00:30:22.061 } 00:30:22.061 ], 00:30:22.061 "mp_policy": "active_passive" 00:30:22.061 } 00:30:22.061 } 00:30:22.061 ]' 00:30:22.061 19:48:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:22.318 19:48:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:22.318 19:48:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:22.318 19:48:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:22.318 19:48:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:22.318 19:48:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:30:22.319 19:48:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:30:22.319 19:48:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:30:22.319 19:48:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:30:22.319 19:48:41 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:22.319 19:48:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:22.576 19:48:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=55f13410-d005-4d58-9a05-cd896861b559 00:30:22.576 19:48:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:30:22.576 19:48:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 55f13410-d005-4d58-9a05-cd896861b559 00:30:22.834 19:48:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:30:22.834 19:48:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=480c18bb-309f-49ce-98ea-ba49f978055b 00:30:22.834 19:48:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 480c18bb-309f-49ce-98ea-ba49f978055b 00:30:23.092 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=dcc48feb-70f8-4194-9911-a809bd8fb933 00:30:23.092 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z dcc48feb-70f8-4194-9911-a809bd8fb933 ]] 00:30:23.092 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 dcc48feb-70f8-4194-9911-a809bd8fb933 5120 00:30:23.092 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:30:23.092 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:23.093 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=dcc48feb-70f8-4194-9911-a809bd8fb933 00:30:23.093 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:30:23.093 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size dcc48feb-70f8-4194-9911-a809bd8fb933 00:30:23.093 19:48:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=dcc48feb-70f8-4194-9911-a809bd8fb933 00:30:23.093 19:48:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:23.093 19:48:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:23.093 19:48:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:30:23.093 19:48:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dcc48feb-70f8-4194-9911-a809bd8fb933 00:30:23.351 19:48:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:23.351 { 00:30:23.351 "name": "dcc48feb-70f8-4194-9911-a809bd8fb933", 00:30:23.351 "aliases": [ 00:30:23.351 "lvs/basen1p0" 00:30:23.351 ], 00:30:23.351 "product_name": "Logical Volume", 00:30:23.351 "block_size": 4096, 00:30:23.351 "num_blocks": 5242880, 00:30:23.351 "uuid": "dcc48feb-70f8-4194-9911-a809bd8fb933", 00:30:23.351 "assigned_rate_limits": { 00:30:23.351 "rw_ios_per_sec": 0, 00:30:23.351 "rw_mbytes_per_sec": 0, 00:30:23.351 "r_mbytes_per_sec": 0, 00:30:23.351 "w_mbytes_per_sec": 0 00:30:23.351 }, 00:30:23.351 "claimed": false, 00:30:23.351 "zoned": false, 00:30:23.351 "supported_io_types": { 00:30:23.351 "read": true, 00:30:23.351 "write": true, 00:30:23.351 "unmap": true, 00:30:23.351 "flush": false, 00:30:23.351 "reset": true, 00:30:23.351 "nvme_admin": false, 00:30:23.351 "nvme_io": false, 00:30:23.351 "nvme_io_md": false, 00:30:23.351 "write_zeroes": 
true, 00:30:23.351 "zcopy": false, 00:30:23.351 "get_zone_info": false, 00:30:23.351 "zone_management": false, 00:30:23.351 "zone_append": false, 00:30:23.351 "compare": false, 00:30:23.351 "compare_and_write": false, 00:30:23.351 "abort": false, 00:30:23.351 "seek_hole": true, 00:30:23.351 "seek_data": true, 00:30:23.351 "copy": false, 00:30:23.351 "nvme_iov_md": false 00:30:23.351 }, 00:30:23.351 "driver_specific": { 00:30:23.351 "lvol": { 00:30:23.351 "lvol_store_uuid": "480c18bb-309f-49ce-98ea-ba49f978055b", 00:30:23.351 "base_bdev": "basen1", 00:30:23.351 "thin_provision": true, 00:30:23.351 "num_allocated_clusters": 0, 00:30:23.351 "snapshot": false, 00:30:23.351 "clone": false, 00:30:23.351 "esnap_clone": false 00:30:23.351 } 00:30:23.351 } 00:30:23.351 } 00:30:23.351 ]' 00:30:23.351 19:48:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:23.351 19:48:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:23.351 19:48:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:23.351 19:48:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:30:23.351 19:48:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:30:23.351 19:48:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:30:23.351 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:30:23.351 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:30:23.351 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:30:23.608 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:30:23.608 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:30:23.608 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:30:23.870 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:30:23.870 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:30:23.870 19:48:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d dcc48feb-70f8-4194-9911-a809bd8fb933 -c cachen1p0 --l2p_dram_limit 2 00:30:24.129 [2024-12-05 19:48:42.938448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.129 [2024-12-05 19:48:42.938508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:24.129 [2024-12-05 19:48:42.938524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:24.129 [2024-12-05 19:48:42.938533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.129 [2024-12-05 19:48:42.938590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.129 [2024-12-05 19:48:42.938600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:24.129 [2024-12-05 19:48:42.938610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:30:24.129 [2024-12-05 19:48:42.938618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.129 [2024-12-05 19:48:42.938639] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:24.129 [2024-12-05 
19:48:42.939456] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:24.129 [2024-12-05 19:48:42.939491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.129 [2024-12-05 19:48:42.939499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:24.129 [2024-12-05 19:48:42.939512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.854 ms 00:30:24.129 [2024-12-05 19:48:42.939519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.129 [2024-12-05 19:48:42.939582] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 30d6ecf8-6f20-4111-8db2-0a3086068cce 00:30:24.129 [2024-12-05 19:48:42.940709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.129 [2024-12-05 19:48:42.940742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:30:24.129 [2024-12-05 19:48:42.940752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:30:24.129 [2024-12-05 19:48:42.940762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.129 [2024-12-05 19:48:42.946110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.129 [2024-12-05 19:48:42.946165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:24.129 [2024-12-05 19:48:42.946175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.307 ms 00:30:24.129 [2024-12-05 19:48:42.946184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.129 [2024-12-05 19:48:42.946223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.129 [2024-12-05 19:48:42.946234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:24.129 [2024-12-05 19:48:42.946242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:30:24.129 [2024-12-05 19:48:42.946253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.129 [2024-12-05 19:48:42.946295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.129 [2024-12-05 19:48:42.946306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:24.129 [2024-12-05 19:48:42.946317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:24.129 [2024-12-05 19:48:42.946326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.129 [2024-12-05 19:48:42.946348] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:24.129 [2024-12-05 19:48:42.949965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.129 [2024-12-05 19:48:42.949998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:24.129 [2024-12-05 19:48:42.950011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.620 ms 00:30:24.129 [2024-12-05 19:48:42.950019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.129 [2024-12-05 19:48:42.950049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.129 [2024-12-05 19:48:42.950057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:24.129 [2024-12-05 19:48:42.950067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:24.129 [2024-12-05 19:48:42.950075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:24.129 [2024-12-05 19:48:42.950108] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:30:24.129 [2024-12-05 19:48:42.950263] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:24.129 [2024-12-05 19:48:42.950280] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:24.129 [2024-12-05 19:48:42.950291] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:24.129 [2024-12-05 19:48:42.950303] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:24.129 [2024-12-05 19:48:42.950311] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:24.129 [2024-12-05 19:48:42.950321] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:24.129 [2024-12-05 19:48:42.950327] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:24.129 [2024-12-05 19:48:42.950341] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:24.129 [2024-12-05 19:48:42.950348] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:24.129 [2024-12-05 19:48:42.950357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.129 [2024-12-05 19:48:42.950365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:24.129 [2024-12-05 19:48:42.950378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.251 ms 00:30:24.129 [2024-12-05 19:48:42.950390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.129 [2024-12-05 19:48:42.950485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.129 [2024-12-05 19:48:42.950500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:24.129 [2024-12-05 19:48:42.950517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:30:24.129 [2024-12-05 19:48:42.950529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.129 [2024-12-05 19:48:42.950659] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:24.129 [2024-12-05 19:48:42.950680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:24.129 [2024-12-05 19:48:42.950690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:24.129 [2024-12-05 19:48:42.950698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:24.129 [2024-12-05 19:48:42.950708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:24.129 [2024-12-05 19:48:42.950715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:24.129 [2024-12-05 19:48:42.950724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:24.129 [2024-12-05 19:48:42.950732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:24.129 [2024-12-05 19:48:42.950740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:24.129 [2024-12-05 19:48:42.950747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:24.129 [2024-12-05 19:48:42.950757] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:24.129 [2024-12-05 19:48:42.950764] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:30:24.129 [2024-12-05 19:48:42.950772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:24.129 [2024-12-05 19:48:42.950779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:24.129 [2024-12-05 19:48:42.950789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:24.129 [2024-12-05 19:48:42.950795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:24.129 [2024-12-05 19:48:42.950805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:24.129 [2024-12-05 19:48:42.950812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:24.129 [2024-12-05 19:48:42.950821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:24.129 [2024-12-05 19:48:42.950828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:24.129 [2024-12-05 19:48:42.950836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:24.129 [2024-12-05 19:48:42.950843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:24.129 [2024-12-05 19:48:42.950851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:24.129 [2024-12-05 19:48:42.950858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:24.129 [2024-12-05 19:48:42.950866] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:24.129 [2024-12-05 19:48:42.950874] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:24.129 [2024-12-05 19:48:42.950882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:24.129 [2024-12-05 19:48:42.950889] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:24.129 [2024-12-05 19:48:42.950897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:24.129 [2024-12-05 19:48:42.950904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:24.129 [2024-12-05 19:48:42.950915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:24.129 [2024-12-05 19:48:42.950922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:24.129 [2024-12-05 19:48:42.950932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:24.129 [2024-12-05 19:48:42.950938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:24.129 [2024-12-05 19:48:42.950950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:24.129 [2024-12-05 19:48:42.950961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:24.129 [2024-12-05 19:48:42.950977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:24.129 [2024-12-05 19:48:42.950989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:24.129 [2024-12-05 19:48:42.951003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:24.130 [2024-12-05 19:48:42.951013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:24.130 [2024-12-05 19:48:42.951022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:24.130 [2024-12-05 19:48:42.951029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:24.130 [2024-12-05 19:48:42.951037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:24.130 [2024-12-05 19:48:42.951043] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:30:24.130 [2024-12-05 19:48:42.951053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:24.130 [2024-12-05 19:48:42.951059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:24.130 [2024-12-05 19:48:42.951068] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:24.130 [2024-12-05 19:48:42.951076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:24.130 [2024-12-05 19:48:42.951086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:24.130 [2024-12-05 19:48:42.951093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:24.130 [2024-12-05 19:48:42.951102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:24.130 [2024-12-05 19:48:42.951108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:24.130 [2024-12-05 19:48:42.951116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:24.130 [2024-12-05 19:48:42.951137] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:24.130 [2024-12-05 19:48:42.951152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:24.130 [2024-12-05 19:48:42.951161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:24.130 [2024-12-05 19:48:42.951174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:24.130 [2024-12-05 19:48:42.951187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:24.130 [2024-12-05 19:48:42.951201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:24.130 [2024-12-05 19:48:42.951213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:24.130 [2024-12-05 19:48:42.951229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:24.130 [2024-12-05 19:48:42.951236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:24.130 [2024-12-05 19:48:42.951247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:24.130 [2024-12-05 19:48:42.951254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:24.130 [2024-12-05 19:48:42.951264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:24.130 [2024-12-05 19:48:42.951272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:24.130 [2024-12-05 19:48:42.951281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:24.130 [2024-12-05 19:48:42.951288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:24.130 [2024-12-05 19:48:42.951297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:24.130 [2024-12-05 19:48:42.951304] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:24.130 [2024-12-05 19:48:42.951314] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:24.130 [2024-12-05 19:48:42.951322] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:24.130 [2024-12-05 19:48:42.951330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:24.130 [2024-12-05 19:48:42.951337] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:24.130 [2024-12-05 19:48:42.951346] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:24.130 [2024-12-05 19:48:42.951354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.130 [2024-12-05 19:48:42.951363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:24.130 [2024-12-05 19:48:42.951371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.767 ms 00:30:24.130 [2024-12-05 19:48:42.951379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.130 [2024-12-05 19:48:42.951420] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
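The scrub announced above runs for several seconds, so this is a natural pause to recap what ftl/common.sh has assembled. A minimal sketch of the same bring-up, using only the RPCs traced earlier in this run (the lvstore/lvol UUIDs and the 0000:00:10.0 cache address are specific to this run and stand in for whatever a local setup returns):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Base device: a thin-provisioned 20 GiB lvol carved from basen1
# (ftl/common.sh@68-69 above).
lvs=$("$RPC" bdev_lvol_create_lvstore basen1 lvs)
base=$("$RPC" bdev_lvol_create basen1p0 20480 -t -u "$lvs")

# NV cache: attach the cache NVMe and split off a 5 GiB partition
# (ftl/common.sh@45 and @50 above).
"$RPC" bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
"$RPC" bdev_split_create cachen1 -s 5120 1

# Stack the FTL bdev on both; on a fresh NV cache this triggers the
# multi-second scrub in progress above (ftl/common.sh@119).
"$RPC" -t 60 bdev_ftl_create -b ftl -d "$base" -c cachen1p0 --l2p_dram_limit 2

The layout dump above also cross-checks: the base-dev data region is 0x480000 blocks, i.e. 4718592 x 4 KiB = 18432 MiB, and 3774873 L2P entries at 4 bytes each fit the 14.50 MiB l2p region (3774873 x 4 B is about 14.4 MiB). 3774873 is almost exactly 80% of 4718592 blocks, consistent with FTL holding back roughly a fifth of the base device for internal use.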
00:30:24.130 [2024-12-05 19:48:42.951444] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:29.489 [2024-12-05 19:48:47.793056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.489 [2024-12-05 19:48:47.793114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:29.489 [2024-12-05 19:48:47.793138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4841.619 ms 00:30:29.489 [2024-12-05 19:48:47.793149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.489 [2024-12-05 19:48:47.818498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.489 [2024-12-05 19:48:47.818571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:29.489 [2024-12-05 19:48:47.818584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.133 ms 00:30:29.489 [2024-12-05 19:48:47.818594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.489 [2024-12-05 19:48:47.818679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.489 [2024-12-05 19:48:47.818692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:29.489 [2024-12-05 19:48:47.818700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:30:29.489 [2024-12-05 19:48:47.818714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.489 [2024-12-05 19:48:47.848798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.489 [2024-12-05 19:48:47.848837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:29.489 [2024-12-05 19:48:47.848848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.037 ms 00:30:29.489 [2024-12-05 19:48:47.848858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.489 [2024-12-05 19:48:47.848892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.489 [2024-12-05 19:48:47.848904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:29.489 [2024-12-05 19:48:47.848913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:29.489 [2024-12-05 19:48:47.848921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.489 [2024-12-05 19:48:47.849285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.489 [2024-12-05 19:48:47.849308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:29.489 [2024-12-05 19:48:47.849323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.312 ms 00:30:29.489 [2024-12-05 19:48:47.849332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.489 [2024-12-05 19:48:47.849372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.489 [2024-12-05 19:48:47.849386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:29.489 [2024-12-05 19:48:47.849399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:30:29.489 [2024-12-05 19:48:47.849410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.489 [2024-12-05 19:48:47.863405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.489 [2024-12-05 19:48:47.863439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:29.489 [2024-12-05 19:48:47.863448] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.974 ms 00:30:29.489 [2024-12-05 19:48:47.863458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.489 [2024-12-05 19:48:47.886033] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:29.489 [2024-12-05 19:48:47.886943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.489 [2024-12-05 19:48:47.886975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:29.489 [2024-12-05 19:48:47.886991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.406 ms 00:30:29.489 [2024-12-05 19:48:47.887000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.489 [2024-12-05 19:48:47.909953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.489 [2024-12-05 19:48:47.909990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:30:29.489 [2024-12-05 19:48:47.910004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.909 ms 00:30:29.489 [2024-12-05 19:48:47.910012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.489 [2024-12-05 19:48:47.910435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.489 [2024-12-05 19:48:47.910470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:29.489 [2024-12-05 19:48:47.910486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:30:29.489 [2024-12-05 19:48:47.910494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.489 [2024-12-05 19:48:47.934044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.489 [2024-12-05 19:48:47.934079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:30:29.489 [2024-12-05 19:48:47.934093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.492 ms 00:30:29.489 [2024-12-05 19:48:47.934103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.489 [2024-12-05 19:48:47.957205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.489 [2024-12-05 19:48:47.957241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:30:29.489 [2024-12-05 19:48:47.957255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.005 ms 00:30:29.489 [2024-12-05 19:48:47.957262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.489 [2024-12-05 19:48:47.957871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.489 [2024-12-05 19:48:47.957901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:29.489 [2024-12-05 19:48:47.957911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.527 ms 00:30:29.489 [2024-12-05 19:48:47.957921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.489 [2024-12-05 19:48:48.034678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.489 [2024-12-05 19:48:48.034728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:30:29.489 [2024-12-05 19:48:48.034746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 76.719 ms 00:30:29.489 [2024-12-05 19:48:48.034755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.489 [2024-12-05 19:48:48.058694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:29.489 [2024-12-05 19:48:48.058743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:30:29.489 [2024-12-05 19:48:48.058758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.864 ms 00:30:29.489 [2024-12-05 19:48:48.058766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.489 [2024-12-05 19:48:48.082234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.489 [2024-12-05 19:48:48.082280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:30:29.489 [2024-12-05 19:48:48.082293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.427 ms 00:30:29.490 [2024-12-05 19:48:48.082304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.490 [2024-12-05 19:48:48.105742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.490 [2024-12-05 19:48:48.105785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:29.490 [2024-12-05 19:48:48.105799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.398 ms 00:30:29.490 [2024-12-05 19:48:48.105814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.490 [2024-12-05 19:48:48.105859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.490 [2024-12-05 19:48:48.105869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:29.490 [2024-12-05 19:48:48.105882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:29.490 [2024-12-05 19:48:48.105889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.490 [2024-12-05 19:48:48.105967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.490 [2024-12-05 19:48:48.105978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:29.490 [2024-12-05 19:48:48.105988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:30:29.490 [2024-12-05 19:48:48.105996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.490 [2024-12-05 19:48:48.106904] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 5167.975 ms, result 0 00:30:29.490 { 00:30:29.490 "name": "ftl", 00:30:29.490 "uuid": "30d6ecf8-6f20-4111-8db2-0a3086068cce" 00:30:29.490 } 00:30:29.490 19:48:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:30:29.490 [2024-12-05 19:48:48.318302] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:29.490 19:48:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:30:29.747 19:48:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:30:29.747 [2024-12-05 19:48:48.722702] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:29.747 19:48:48 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:30:30.004 [2024-12-05 19:48:48.919033] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:30.004 19:48:48 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:30:30.262 19:48:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:30:30.262 19:48:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:30:30.262 19:48:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:30:30.262 19:48:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:30:30.262 19:48:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:30:30.262 19:48:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:30:30.262 19:48:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:30:30.262 19:48:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:30:30.262 19:48:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:30:30.262 19:48:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:30.262 Fill FTL, iteration 1 00:30:30.262 19:48:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:30:30.262 19:48:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:30.262 19:48:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:30.262 19:48:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:30.262 19:48:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:30.262 19:48:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:30:30.262 19:48:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=81331 00:30:30.262 19:48:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:30:30.263 19:48:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 81331 /var/tmp/spdk.tgt.sock 00:30:30.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:30:30.263 19:48:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81331 ']' 00:30:30.263 19:48:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:30:30.263 19:48:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:30:30.263 19:48:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:30.263 19:48:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:30:30.263 19:48:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:30.263 19:48:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:30.519 [2024-12-05 19:48:49.330389] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
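Just launched above: the helper target that tcp_initiator_setup spawns (ftl/common.sh@162-165). A sketch of that step: a second spdk_tgt pinned to core 1 with its own RPC socket, so it cannot collide with the main target's default /var/tmp/spdk.sock.

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' \
    --rpc-socket=/var/tmp/spdk.tgt.sock &
spdk_ini_pid=$!
# Block until the RPC socket accepts connections (pid 81331 in this run).
waitforlisten "$spdk_ini_pid" /var/tmp/spdk.tgt.sock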
00:30:30.519 [2024-12-05 19:48:49.330511] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81331 ] 00:30:30.519 [2024-12-05 19:48:49.491147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.799 [2024-12-05 19:48:49.591993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:31.363 19:48:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:31.363 19:48:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:31.363 19:48:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:30:31.620 ftln1 00:30:31.620 19:48:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:30:31.620 19:48:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:30:31.877 19:48:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:30:31.877 19:48:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 81331 00:30:31.877 19:48:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81331 ']' 00:30:31.877 19:48:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 81331 00:30:31.877 19:48:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:31.877 19:48:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:31.877 19:48:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81331 00:30:31.877 19:48:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:31.877 killing process with pid 81331 00:30:31.877 19:48:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:31.877 19:48:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81331' 00:30:31.877 19:48:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 81331 00:30:31.877 19:48:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 81331 00:30:33.249 19:48:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:30:33.249 19:48:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:33.249 [2024-12-05 19:48:52.244565] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
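Worth noting before the transfer output below: tcp_dd does not talk RPC during the copy. The helper target attaches the exported FTL namespace over NVMe/TCP as ftln1, its bdev subsystem config is wrapped into ini.json (the '{"subsystems": [' / ']}' echoes above), the helper is killed, and spdk_dd recreates ftln1 for itself from that JSON. Condensed from the trace, with $RPC and paths as in this run:

"$RPC" -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller \
    -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
{ echo '{"subsystems": ['
  "$RPC" -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
  echo ']}'
} > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json

# spdk_dd replays the saved config, so ftln1 exists only inside the dd process.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
    --rpc-socket=/var/tmp/spdk.tgt.sock \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
    --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0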
00:30:33.249 [2024-12-05 19:48:52.244678] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81373 ] 00:30:33.506 [2024-12-05 19:48:52.404803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.506 [2024-12-05 19:48:52.506639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:34.881  [2024-12-05T19:48:54.860Z] Copying: 204/1024 [MB] (204 MBps) [2024-12-05T19:48:56.232Z] Copying: 433/1024 [MB] (229 MBps) [2024-12-05T19:48:57.163Z] Copying: 659/1024 [MB] (226 MBps) [2024-12-05T19:48:57.420Z] Copying: 924/1024 [MB] (265 MBps) [2024-12-05T19:48:57.986Z] Copying: 1024/1024 [MB] (average 233 MBps) 00:30:38.980 00:30:38.980 Calculate MD5 checksum, iteration 1 00:30:38.980 19:48:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:30:38.980 19:48:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:30:38.981 19:48:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:38.981 19:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:38.981 19:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:38.981 19:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:38.981 19:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:38.981 19:48:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:38.981 [2024-12-05 19:48:57.912033] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
00:30:38.981 [2024-12-05 19:48:57.912166] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81437 ] 00:30:39.238 [2024-12-05 19:48:58.072101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.238 [2024-12-05 19:48:58.170173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:40.652  [2024-12-05T19:49:00.220Z] Copying: 681/1024 [MB] (681 MBps) [2024-12-05T19:49:00.783Z] Copying: 1024/1024 [MB] (average 649 MBps) 00:30:41.777 00:30:41.777 19:49:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:30:41.777 19:49:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:44.318 19:49:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:44.318 Fill FTL, iteration 2 00:30:44.318 19:49:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=183c759ebe648d53907ae5842fd36daf 00:30:44.318 19:49:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:44.318 19:49:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:44.318 19:49:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:30:44.318 19:49:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:44.318 19:49:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:44.318 19:49:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:44.318 19:49:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:44.318 19:49:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:44.318 19:49:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:44.318 [2024-12-05 19:49:02.950995] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
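With the first checksum banked in sums[0] (183c759ebe648d53907ae5842fd36daf above), the loop repeats at the next 1 GiB offset. The whole fill/verify cycle, condensed from the upgrade_shutdown.sh@28-48 trace (variable names as traced; $testdir stands for /home/vagrant/spdk_repo/spdk/test/ftl):

seek=0 skip=0 iterations=2
for ((i = 0; i < iterations; i++)); do
    # Write 1024 x 1 MiB of random data at the current offset...
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
    seek=$((seek + 1024))
    # ...read the same region back through NVMe/TCP...
    tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    skip=$((skip + 1024))
    # ...and bank its checksum for verification later in the test.
    sums[i]=$(md5sum "$testdir/file" | cut -f1 -d' ')
done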
00:30:44.318 [2024-12-05 19:49:02.951121] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81494 ] 00:30:44.318 [2024-12-05 19:49:03.111202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.318 [2024-12-05 19:49:03.207962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:45.698  [2024-12-05T19:49:05.662Z] Copying: 223/1024 [MB] (223 MBps) [2024-12-05T19:49:06.595Z] Copying: 464/1024 [MB] (241 MBps) [2024-12-05T19:49:07.969Z] Copying: 719/1024 [MB] (255 MBps) [2024-12-05T19:49:07.969Z] Copying: 972/1024 [MB] (253 MBps) [2024-12-05T19:49:08.535Z] Copying: 1024/1024 [MB] (average 242 MBps) 00:30:49.529 00:30:49.529 Calculate MD5 checksum, iteration 2 00:30:49.529 19:49:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:49.529 19:49:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:49.529 19:49:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:49.529 19:49:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:49.529 19:49:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:49.529 19:49:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:49.529 19:49:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:49.529 19:49:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:49.529 [2024-12-05 19:49:08.428550] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
00:30:49.529 [2024-12-05 19:49:08.429635] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81552 ] 00:30:49.787 [2024-12-05 19:49:08.584569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.787 [2024-12-05 19:49:08.665918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:51.160  [2024-12-05T19:49:10.733Z] Copying: 697/1024 [MB] (697 MBps) [2024-12-05T19:49:12.159Z] Copying: 1024/1024 [MB] (average 684 MBps) 00:30:53.153 00:30:53.153 19:49:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:53.153 19:49:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:55.093 19:49:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:55.093 19:49:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=86f2fbbd4ef728a89cc3846547d01c13 00:30:55.093 19:49:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:55.093 19:49:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:55.093 19:49:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:55.093 [2024-12-05 19:49:14.081910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.093 [2024-12-05 19:49:14.081956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:55.093 [2024-12-05 19:49:14.081968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:55.093 [2024-12-05 19:49:14.081974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.093 [2024-12-05 19:49:14.081994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.093 [2024-12-05 19:49:14.082003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:55.093 [2024-12-05 19:49:14.082010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:55.093 [2024-12-05 19:49:14.082016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.093 [2024-12-05 19:49:14.082032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.093 [2024-12-05 19:49:14.082039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:55.093 [2024-12-05 19:49:14.082045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:55.093 [2024-12-05 19:49:14.082050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.093 [2024-12-05 19:49:14.082100] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.180 ms, result 0 00:30:55.093 true 00:30:55.351 19:49:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:55.351 { 00:30:55.351 "name": "ftl", 00:30:55.351 "properties": [ 00:30:55.351 { 00:30:55.351 "name": "superblock_version", 00:30:55.351 "value": 5, 00:30:55.351 "read-only": true 00:30:55.351 }, 00:30:55.351 { 00:30:55.351 "name": "base_device", 00:30:55.351 "bands": [ 00:30:55.351 { 00:30:55.351 "id": 0, 00:30:55.351 "state": "FREE", 00:30:55.351 "validity": 0.0 
00:30:55.351 }, 00:30:55.351 { 00:30:55.351 "id": 1, 00:30:55.351 "state": "FREE", 00:30:55.351 "validity": 0.0 00:30:55.351 }, 00:30:55.351 { 00:30:55.351 "id": 2, 00:30:55.351 "state": "FREE", 00:30:55.352 "validity": 0.0 00:30:55.352 }, 00:30:55.352 { 00:30:55.352 "id": 3, 00:30:55.352 "state": "FREE", 00:30:55.352 "validity": 0.0 00:30:55.352 }, 00:30:55.352 { 00:30:55.352 "id": 4, 00:30:55.352 "state": "FREE", 00:30:55.352 "validity": 0.0 00:30:55.352 }, 00:30:55.352 { 00:30:55.352 "id": 5, 00:30:55.352 "state": "FREE", 00:30:55.352 "validity": 0.0 00:30:55.352 }, 00:30:55.352 { 00:30:55.352 "id": 6, 00:30:55.352 "state": "FREE", 00:30:55.352 "validity": 0.0 00:30:55.352 }, 00:30:55.352 { 00:30:55.352 "id": 7, 00:30:55.352 "state": "FREE", 00:30:55.352 "validity": 0.0 00:30:55.352 }, 00:30:55.352 { 00:30:55.352 "id": 8, 00:30:55.352 "state": "FREE", 00:30:55.352 "validity": 0.0 00:30:55.352 }, 00:30:55.352 { 00:30:55.352 "id": 9, 00:30:55.352 "state": "FREE", 00:30:55.352 "validity": 0.0 00:30:55.352 }, 00:30:55.352 { 00:30:55.352 "id": 10, 00:30:55.352 "state": "FREE", 00:30:55.352 "validity": 0.0 00:30:55.352 }, 00:30:55.352 { 00:30:55.352 "id": 11, 00:30:55.352 "state": "FREE", 00:30:55.352 "validity": 0.0 00:30:55.352 }, 00:30:55.352 { 00:30:55.352 "id": 12, 00:30:55.352 "state": "FREE", 00:30:55.352 "validity": 0.0 00:30:55.352 }, 00:30:55.352 { 00:30:55.352 "id": 13, 00:30:55.352 "state": "FREE", 00:30:55.352 "validity": 0.0 00:30:55.352 }, 00:30:55.352 { 00:30:55.352 "id": 14, 00:30:55.352 "state": "FREE", 00:30:55.352 "validity": 0.0 00:30:55.352 }, 00:30:55.352 { 00:30:55.352 "id": 15, 00:30:55.352 "state": "FREE", 00:30:55.352 "validity": 0.0 00:30:55.352 }, 00:30:55.352 { 00:30:55.352 "id": 16, 00:30:55.352 "state": "FREE", 00:30:55.352 "validity": 0.0 00:30:55.352 }, 00:30:55.352 { 00:30:55.352 "id": 17, 00:30:55.352 "state": "FREE", 00:30:55.352 "validity": 0.0 00:30:55.352 } 00:30:55.352 ], 00:30:55.352 "read-only": true 00:30:55.352 }, 00:30:55.352 { 00:30:55.352 "name": "cache_device", 00:30:55.352 "type": "bdev", 00:30:55.352 "chunks": [ 00:30:55.352 { 00:30:55.352 "id": 0, 00:30:55.352 "state": "INACTIVE", 00:30:55.352 "utilization": 0.0 00:30:55.352 }, 00:30:55.352 { 00:30:55.352 "id": 1, 00:30:55.352 "state": "CLOSED", 00:30:55.352 "utilization": 1.0 00:30:55.352 }, 00:30:55.352 { 00:30:55.352 "id": 2, 00:30:55.352 "state": "CLOSED", 00:30:55.352 "utilization": 1.0 00:30:55.352 }, 00:30:55.352 { 00:30:55.352 "id": 3, 00:30:55.352 "state": "OPEN", 00:30:55.352 "utilization": 0.001953125 00:30:55.352 }, 00:30:55.352 { 00:30:55.352 "id": 4, 00:30:55.352 "state": "OPEN", 00:30:55.352 "utilization": 0.0 00:30:55.352 } 00:30:55.352 ], 00:30:55.352 "read-only": true 00:30:55.352 }, 00:30:55.352 { 00:30:55.352 "name": "verbose_mode", 00:30:55.352 "value": true, 00:30:55.352 "unit": "", 00:30:55.352 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:55.352 }, 00:30:55.352 { 00:30:55.352 "name": "prep_upgrade_on_shutdown", 00:30:55.352 "value": false, 00:30:55.352 "unit": "", 00:30:55.352 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:55.352 } 00:30:55.352 ] 00:30:55.352 } 00:30:55.352 19:49:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:55.610 [2024-12-05 19:49:14.514297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
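The property dump above shows where the fill landed in the NV cache: chunks 1 and 2 CLOSED at utilization 1.0, chunk 3 OPEN and barely used at 0.001953125, and the test's next step (visible below) counts those non-empty chunks with jq before flipping prep_upgrade_on_shutdown. A sketch of that check, assuming $RPC points at scripts/rpc.py as before:

used=$("$RPC" bdev_ftl_get_properties -b ftl \
    | jq '[.properties[] | select(.name == "cache_device")
           | .chunks[] | select(.utilization != 0.0)] | length')
# Three chunks hold data in this run, so the shutdown path has work to do.
[[ $used -eq 0 ]] || echo "$used NV-cache chunks hold data"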
00:30:55.610 [2024-12-05 19:49:14.514341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:55.610 [2024-12-05 19:49:14.514350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:55.610 [2024-12-05 19:49:14.514356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.610 [2024-12-05 19:49:14.514374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.610 [2024-12-05 19:49:14.514381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:55.610 [2024-12-05 19:49:14.514387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:55.610 [2024-12-05 19:49:14.514393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.610 [2024-12-05 19:49:14.514407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.610 [2024-12-05 19:49:14.514413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:55.610 [2024-12-05 19:49:14.514419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:55.610 [2024-12-05 19:49:14.514424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.610 [2024-12-05 19:49:14.514469] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.164 ms, result 0 00:30:55.610 true 00:30:55.610 19:49:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:55.610 19:49:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:55.610 19:49:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:55.869 19:49:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:55.869 19:49:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:55.869 19:49:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:56.128 [2024-12-05 19:49:14.918642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.128 [2024-12-05 19:49:14.918681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:56.128 [2024-12-05 19:49:14.918691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:56.128 [2024-12-05 19:49:14.918697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.128 [2024-12-05 19:49:14.918714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.128 [2024-12-05 19:49:14.918721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:56.128 [2024-12-05 19:49:14.918728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:56.128 [2024-12-05 19:49:14.918733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.128 [2024-12-05 19:49:14.918747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.128 [2024-12-05 19:49:14.918753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:56.128 [2024-12-05 19:49:14.918759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:56.128 [2024-12-05 19:49:14.918764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:56.128 [2024-12-05 19:49:14.918808] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.157 ms, result 0 00:30:56.128 true 00:30:56.128 19:49:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:56.128 { 00:30:56.128 "name": "ftl", 00:30:56.128 "properties": [ 00:30:56.128 { 00:30:56.128 "name": "superblock_version", 00:30:56.128 "value": 5, 00:30:56.128 "read-only": true 00:30:56.128 }, 00:30:56.128 { 00:30:56.128 "name": "base_device", 00:30:56.128 "bands": [ 00:30:56.128 { 00:30:56.128 "id": 0, 00:30:56.128 "state": "FREE", 00:30:56.128 "validity": 0.0 00:30:56.128 }, 00:30:56.128 { 00:30:56.128 "id": 1, 00:30:56.128 "state": "FREE", 00:30:56.128 "validity": 0.0 00:30:56.128 }, 00:30:56.128 { 00:30:56.128 "id": 2, 00:30:56.128 "state": "FREE", 00:30:56.128 "validity": 0.0 00:30:56.128 }, 00:30:56.128 { 00:30:56.128 "id": 3, 00:30:56.128 "state": "FREE", 00:30:56.128 "validity": 0.0 00:30:56.128 }, 00:30:56.128 { 00:30:56.128 "id": 4, 00:30:56.128 "state": "FREE", 00:30:56.128 "validity": 0.0 00:30:56.128 }, 00:30:56.128 { 00:30:56.128 "id": 5, 00:30:56.128 "state": "FREE", 00:30:56.128 "validity": 0.0 00:30:56.128 }, 00:30:56.128 { 00:30:56.128 "id": 6, 00:30:56.128 "state": "FREE", 00:30:56.128 "validity": 0.0 00:30:56.128 }, 00:30:56.128 { 00:30:56.128 "id": 7, 00:30:56.128 "state": "FREE", 00:30:56.128 "validity": 0.0 00:30:56.128 }, 00:30:56.128 { 00:30:56.128 "id": 8, 00:30:56.128 "state": "FREE", 00:30:56.128 "validity": 0.0 00:30:56.128 }, 00:30:56.128 { 00:30:56.128 "id": 9, 00:30:56.128 "state": "FREE", 00:30:56.128 "validity": 0.0 00:30:56.128 }, 00:30:56.128 { 00:30:56.128 "id": 10, 00:30:56.128 "state": "FREE", 00:30:56.128 "validity": 0.0 00:30:56.128 }, 00:30:56.128 { 00:30:56.128 "id": 11, 00:30:56.128 "state": "FREE", 00:30:56.128 "validity": 0.0 00:30:56.128 }, 00:30:56.128 { 00:30:56.128 "id": 12, 00:30:56.128 "state": "FREE", 00:30:56.128 "validity": 0.0 00:30:56.128 }, 00:30:56.128 { 00:30:56.128 "id": 13, 00:30:56.128 "state": "FREE", 00:30:56.128 "validity": 0.0 00:30:56.128 }, 00:30:56.128 { 00:30:56.128 "id": 14, 00:30:56.128 "state": "FREE", 00:30:56.128 "validity": 0.0 00:30:56.128 }, 00:30:56.128 { 00:30:56.128 "id": 15, 00:30:56.128 "state": "FREE", 00:30:56.128 "validity": 0.0 00:30:56.128 }, 00:30:56.128 { 00:30:56.128 "id": 16, 00:30:56.128 "state": "FREE", 00:30:56.128 "validity": 0.0 00:30:56.128 }, 00:30:56.128 { 00:30:56.128 "id": 17, 00:30:56.128 "state": "FREE", 00:30:56.128 "validity": 0.0 00:30:56.128 } 00:30:56.128 ], 00:30:56.128 "read-only": true 00:30:56.128 }, 00:30:56.128 { 00:30:56.128 "name": "cache_device", 00:30:56.128 "type": "bdev", 00:30:56.128 "chunks": [ 00:30:56.128 { 00:30:56.128 "id": 0, 00:30:56.128 "state": "INACTIVE", 00:30:56.128 "utilization": 0.0 00:30:56.128 }, 00:30:56.128 { 00:30:56.128 "id": 1, 00:30:56.128 "state": "CLOSED", 00:30:56.128 "utilization": 1.0 00:30:56.128 }, 00:30:56.128 { 00:30:56.128 "id": 2, 00:30:56.128 "state": "CLOSED", 00:30:56.128 "utilization": 1.0 00:30:56.128 }, 00:30:56.128 { 00:30:56.128 "id": 3, 00:30:56.128 "state": "OPEN", 00:30:56.128 "utilization": 0.001953125 00:30:56.128 }, 00:30:56.128 { 00:30:56.128 "id": 4, 00:30:56.128 "state": "OPEN", 00:30:56.128 "utilization": 0.0 00:30:56.128 } 00:30:56.128 ], 00:30:56.128 "read-only": true 00:30:56.128 }, 00:30:56.128 { 00:30:56.128 "name": "verbose_mode", 
00:30:56.128 "value": true, 00:30:56.128 "unit": "", 00:30:56.128 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:56.128 }, 00:30:56.128 { 00:30:56.128 "name": "prep_upgrade_on_shutdown", 00:30:56.128 "value": true, 00:30:56.128 "unit": "", 00:30:56.128 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:56.128 } 00:30:56.128 ] 00:30:56.128 } 00:30:56.384 19:49:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:56.384 19:49:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81195 ]] 00:30:56.384 19:49:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81195 00:30:56.384 19:49:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81195 ']' 00:30:56.384 19:49:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 81195 00:30:56.384 19:49:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:56.384 19:49:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:56.384 19:49:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81195 00:30:56.384 killing process with pid 81195 00:30:56.384 19:49:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:56.385 19:49:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:56.385 19:49:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81195' 00:30:56.385 19:49:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 81195 00:30:56.385 19:49:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 81195 00:30:56.948 [2024-12-05 19:49:15.697773] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:56.948 [2024-12-05 19:49:15.709438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.948 [2024-12-05 19:49:15.709477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:56.948 [2024-12-05 19:49:15.709487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:56.948 [2024-12-05 19:49:15.709493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:56.948 [2024-12-05 19:49:15.709511] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:56.948 [2024-12-05 19:49:15.711597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:56.948 [2024-12-05 19:49:15.711624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:56.948 [2024-12-05 19:49:15.711632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.076 ms 00:30:56.948 [2024-12-05 19:49:15.711643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.049 [2024-12-05 19:49:23.900089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.049 [2024-12-05 19:49:23.900154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:05.049 [2024-12-05 19:49:23.900172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8188.397 ms 00:31:05.049 [2024-12-05 19:49:23.900180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.049 [2024-12-05 19:49:23.901764] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:31:05.049 [2024-12-05 19:49:23.901779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:05.049 [2024-12-05 19:49:23.901788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.568 ms 00:31:05.049 [2024-12-05 19:49:23.901803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.049 [2024-12-05 19:49:23.903248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.049 [2024-12-05 19:49:23.903271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:05.049 [2024-12-05 19:49:23.903280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.416 ms 00:31:05.049 [2024-12-05 19:49:23.903293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.049 [2024-12-05 19:49:23.912827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.049 [2024-12-05 19:49:23.912859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:05.049 [2024-12-05 19:49:23.912868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.505 ms 00:31:05.049 [2024-12-05 19:49:23.912876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.049 [2024-12-05 19:49:23.919459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.049 [2024-12-05 19:49:23.919493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:05.049 [2024-12-05 19:49:23.919503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.553 ms 00:31:05.049 [2024-12-05 19:49:23.919511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.049 [2024-12-05 19:49:23.919588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.049 [2024-12-05 19:49:23.919607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:05.050 [2024-12-05 19:49:23.919616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.047 ms 00:31:05.050 [2024-12-05 19:49:23.919624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.050 [2024-12-05 19:49:23.928588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.050 [2024-12-05 19:49:23.928620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:05.050 [2024-12-05 19:49:23.928629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.949 ms 00:31:05.050 [2024-12-05 19:49:23.928636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.050 [2024-12-05 19:49:23.937999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.050 [2024-12-05 19:49:23.938029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:05.050 [2024-12-05 19:49:23.938038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.334 ms 00:31:05.050 [2024-12-05 19:49:23.938045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.050 [2024-12-05 19:49:23.946726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.050 [2024-12-05 19:49:23.946756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:05.050 [2024-12-05 19:49:23.946765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.652 ms 00:31:05.050 [2024-12-05 19:49:23.946772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.050 [2024-12-05 19:49:23.956032] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.050 [2024-12-05 19:49:23.956064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:05.050 [2024-12-05 19:49:23.956073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.204 ms 00:31:05.050 [2024-12-05 19:49:23.956080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.050 [2024-12-05 19:49:23.956107] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:05.050 [2024-12-05 19:49:23.956136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:05.050 [2024-12-05 19:49:23.956147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:05.050 [2024-12-05 19:49:23.956155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:05.050 [2024-12-05 19:49:23.956163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:05.050 [2024-12-05 19:49:23.956170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:05.050 [2024-12-05 19:49:23.956178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:05.050 [2024-12-05 19:49:23.956186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:05.050 [2024-12-05 19:49:23.956193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:05.050 [2024-12-05 19:49:23.956200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:05.050 [2024-12-05 19:49:23.956207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:05.050 [2024-12-05 19:49:23.956215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:05.050 [2024-12-05 19:49:23.956222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:05.050 [2024-12-05 19:49:23.956229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:05.050 [2024-12-05 19:49:23.956237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:05.050 [2024-12-05 19:49:23.956244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:05.050 [2024-12-05 19:49:23.956251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:05.050 [2024-12-05 19:49:23.956258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:05.050 [2024-12-05 19:49:23.956265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:05.050 [2024-12-05 19:49:23.956274] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:05.050 [2024-12-05 19:49:23.956281] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 30d6ecf8-6f20-4111-8db2-0a3086068cce 00:31:05.050 [2024-12-05 19:49:23.956289] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:05.050 [2024-12-05 19:49:23.956296] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:31:05.050 [2024-12-05 19:49:23.956303] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:31:05.050 [2024-12-05 19:49:23.956311] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:31:05.050 [2024-12-05 19:49:23.956321] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:05.050 [2024-12-05 19:49:23.956328] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:05.050 [2024-12-05 19:49:23.956338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:05.050 [2024-12-05 19:49:23.956344] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:05.050 [2024-12-05 19:49:23.956351] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:05.050 [2024-12-05 19:49:23.956357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.050 [2024-12-05 19:49:23.956365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:05.050 [2024-12-05 19:49:23.956374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.251 ms 00:31:05.050 [2024-12-05 19:49:23.956381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.050 [2024-12-05 19:49:23.968851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.050 [2024-12-05 19:49:23.968882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:05.050 [2024-12-05 19:49:23.968896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.455 ms 00:31:05.050 [2024-12-05 19:49:23.968904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.050 [2024-12-05 19:49:23.969261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:05.050 [2024-12-05 19:49:23.969279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:05.050 [2024-12-05 19:49:23.969288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.326 ms 00:31:05.050 [2024-12-05 19:49:23.969296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.050 [2024-12-05 19:49:24.010309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:05.050 [2024-12-05 19:49:24.010348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:05.050 [2024-12-05 19:49:24.010359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:05.050 [2024-12-05 19:49:24.010367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.050 [2024-12-05 19:49:24.010400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:05.050 [2024-12-05 19:49:24.010409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:05.050 [2024-12-05 19:49:24.010420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:05.050 [2024-12-05 19:49:24.010428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.050 [2024-12-05 19:49:24.010491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:05.050 [2024-12-05 19:49:24.010501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:05.050 [2024-12-05 19:49:24.010512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:05.050 [2024-12-05 19:49:24.010521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.050 [2024-12-05 19:49:24.010537] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:05.050 [2024-12-05 19:49:24.010544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:05.050 [2024-12-05 19:49:24.010552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:05.050 [2024-12-05 19:49:24.010559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.308 [2024-12-05 19:49:24.087578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:05.308 [2024-12-05 19:49:24.087619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:05.308 [2024-12-05 19:49:24.087635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:05.308 [2024-12-05 19:49:24.087642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.308 [2024-12-05 19:49:24.149489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:05.308 [2024-12-05 19:49:24.149537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:05.308 [2024-12-05 19:49:24.149548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:05.308 [2024-12-05 19:49:24.149555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.308 [2024-12-05 19:49:24.149638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:05.308 [2024-12-05 19:49:24.149648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:05.308 [2024-12-05 19:49:24.149656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:05.308 [2024-12-05 19:49:24.149667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.308 [2024-12-05 19:49:24.149707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:05.308 [2024-12-05 19:49:24.149716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:05.308 [2024-12-05 19:49:24.149724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:05.308 [2024-12-05 19:49:24.149731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.308 [2024-12-05 19:49:24.149828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:05.308 [2024-12-05 19:49:24.149839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:05.308 [2024-12-05 19:49:24.149847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:05.308 [2024-12-05 19:49:24.149855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.308 [2024-12-05 19:49:24.149886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:05.308 [2024-12-05 19:49:24.149895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:05.308 [2024-12-05 19:49:24.149902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:05.308 [2024-12-05 19:49:24.149910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.308 [2024-12-05 19:49:24.149945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:05.308 [2024-12-05 19:49:24.149953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:05.308 [2024-12-05 19:49:24.149961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:05.308 [2024-12-05 19:49:24.149968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.308 
[2024-12-05 19:49:24.150009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:05.308 [2024-12-05 19:49:24.150019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:05.308 [2024-12-05 19:49:24.150027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:05.308 [2024-12-05 19:49:24.150034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:05.308 [2024-12-05 19:49:24.150162] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8440.653 ms, result 0 00:31:13.433 19:49:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:13.433 19:49:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:31:13.433 19:49:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:13.433 19:49:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:13.433 19:49:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:13.433 19:49:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81751 00:31:13.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.433 19:49:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:13.433 19:49:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81751 00:31:13.433 19:49:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81751 ']' 00:31:13.433 19:49:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:13.433 19:49:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.433 19:49:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:13.433 19:49:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.433 19:49:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:13.433 19:49:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:13.433 [2024-12-05 19:49:31.138403] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
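[Editor's note] The 'FTL shutdown' sequence completing above (persist L2P, NV cache metadata, valid map, P2L, band info, trim metadata, superblock, then 'Set FTL clean state') is what prep_upgrade_on_shutdown=true buys: the target is torn down gracefully and all FTL state is flushed so the next startup can pick it up. A minimal sketch of driving this from outside the target, assuming the rpc.py path used throughout this log and assuming prep_upgrade_on_shutdown is settable the same way verbose_mode is set later in this run; PID handling is simplified from the killprocess helper:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Request the upgrade-preparation work on the next shutdown (assumed RPC usage).
  "$rpc" bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
  # Verify the property took effect; jq -e makes the pipeline fail if it is not true.
  "$rpc" bdev_ftl_get_properties -b ftl \
    | jq -e '.properties[] | select(.name == "prep_upgrade_on_shutdown") | .value == true'
  # Graceful teardown: SIGTERM and wait, so FTL gets the chance to persist metadata.
  kill "$spdk_tgt_pid" && wait "$spdk_tgt_pid"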
00:31:13.433 [2024-12-05 19:49:31.138713] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81751 ] 00:31:13.433 [2024-12-05 19:49:31.297307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:13.433 [2024-12-05 19:49:31.397873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:13.433 [2024-12-05 19:49:32.086322] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:13.433 [2024-12-05 19:49:32.086391] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:13.433 [2024-12-05 19:49:32.234990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.433 [2024-12-05 19:49:32.235042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:13.433 [2024-12-05 19:49:32.235056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:13.433 [2024-12-05 19:49:32.235064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.433 [2024-12-05 19:49:32.235121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.433 [2024-12-05 19:49:32.235145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:13.433 [2024-12-05 19:49:32.235154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:31:13.433 [2024-12-05 19:49:32.235161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.433 [2024-12-05 19:49:32.235187] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:13.433 [2024-12-05 19:49:32.235937] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:13.433 [2024-12-05 19:49:32.235962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.433 [2024-12-05 19:49:32.235970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:13.433 [2024-12-05 19:49:32.235979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.783 ms 00:31:13.433 [2024-12-05 19:49:32.235986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.433 [2024-12-05 19:49:32.237119] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:13.433 [2024-12-05 19:49:32.249672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.433 [2024-12-05 19:49:32.249709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:13.433 [2024-12-05 19:49:32.249727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.554 ms 00:31:13.433 [2024-12-05 19:49:32.249735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.433 [2024-12-05 19:49:32.249792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.433 [2024-12-05 19:49:32.249809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:13.433 [2024-12-05 19:49:32.249818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:31:13.433 [2024-12-05 19:49:32.249825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.433 [2024-12-05 19:49:32.254723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.433 [2024-12-05 
19:49:32.254754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:13.433 [2024-12-05 19:49:32.254765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.845 ms 00:31:13.433 [2024-12-05 19:49:32.254773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.433 [2024-12-05 19:49:32.254830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.433 [2024-12-05 19:49:32.254839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:13.433 [2024-12-05 19:49:32.254847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:31:13.433 [2024-12-05 19:49:32.254855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.433 [2024-12-05 19:49:32.254911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.433 [2024-12-05 19:49:32.254924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:13.433 [2024-12-05 19:49:32.254932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:13.433 [2024-12-05 19:49:32.254940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.433 [2024-12-05 19:49:32.254963] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:13.433 [2024-12-05 19:49:32.258193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.433 [2024-12-05 19:49:32.258221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:13.433 [2024-12-05 19:49:32.258230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.235 ms 00:31:13.433 [2024-12-05 19:49:32.258240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.433 [2024-12-05 19:49:32.258265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.433 [2024-12-05 19:49:32.258273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:13.433 [2024-12-05 19:49:32.258281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:13.433 [2024-12-05 19:49:32.258288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.433 [2024-12-05 19:49:32.258308] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:13.433 [2024-12-05 19:49:32.258329] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:13.433 [2024-12-05 19:49:32.258363] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:13.433 [2024-12-05 19:49:32.258379] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:13.434 [2024-12-05 19:49:32.258482] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:13.434 [2024-12-05 19:49:32.258492] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:13.434 [2024-12-05 19:49:32.258502] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:13.434 [2024-12-05 19:49:32.258512] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:13.434 [2024-12-05 19:49:32.258521] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:31:13.434 [2024-12-05 19:49:32.258531] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:13.434 [2024-12-05 19:49:32.258538] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:13.434 [2024-12-05 19:49:32.258545] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:13.434 [2024-12-05 19:49:32.258552] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:13.434 [2024-12-05 19:49:32.258559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.434 [2024-12-05 19:49:32.258567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:13.434 [2024-12-05 19:49:32.258575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.253 ms 00:31:13.434 [2024-12-05 19:49:32.258581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.434 [2024-12-05 19:49:32.258666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.434 [2024-12-05 19:49:32.258674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:13.434 [2024-12-05 19:49:32.258684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:31:13.434 [2024-12-05 19:49:32.258691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.434 [2024-12-05 19:49:32.258792] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:13.434 [2024-12-05 19:49:32.258802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:13.434 [2024-12-05 19:49:32.258810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:13.434 [2024-12-05 19:49:32.258817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:13.434 [2024-12-05 19:49:32.258825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:13.434 [2024-12-05 19:49:32.258832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:13.434 [2024-12-05 19:49:32.258840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:13.434 [2024-12-05 19:49:32.258847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:13.434 [2024-12-05 19:49:32.258854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:13.434 [2024-12-05 19:49:32.258860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:13.434 [2024-12-05 19:49:32.258867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:13.434 [2024-12-05 19:49:32.258873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:13.434 [2024-12-05 19:49:32.258880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:13.434 [2024-12-05 19:49:32.258888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:13.434 [2024-12-05 19:49:32.258895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:13.434 [2024-12-05 19:49:32.258902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:13.434 [2024-12-05 19:49:32.258909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:13.434 [2024-12-05 19:49:32.258915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:13.434 [2024-12-05 19:49:32.258922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:13.434 [2024-12-05 19:49:32.258930] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:13.434 [2024-12-05 19:49:32.258937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:13.434 [2024-12-05 19:49:32.258944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:13.434 [2024-12-05 19:49:32.258950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:13.434 [2024-12-05 19:49:32.258963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:13.434 [2024-12-05 19:49:32.258970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:13.434 [2024-12-05 19:49:32.258977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:13.434 [2024-12-05 19:49:32.258984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:13.434 [2024-12-05 19:49:32.258990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:13.434 [2024-12-05 19:49:32.258997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:13.434 [2024-12-05 19:49:32.259003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:13.434 [2024-12-05 19:49:32.259009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:13.434 [2024-12-05 19:49:32.259016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:13.434 [2024-12-05 19:49:32.259022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:13.434 [2024-12-05 19:49:32.259028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:13.434 [2024-12-05 19:49:32.259035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:13.434 [2024-12-05 19:49:32.259041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:13.434 [2024-12-05 19:49:32.259047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:13.434 [2024-12-05 19:49:32.259054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:13.434 [2024-12-05 19:49:32.259060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:13.434 [2024-12-05 19:49:32.259067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:13.434 [2024-12-05 19:49:32.259073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:13.434 [2024-12-05 19:49:32.259079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:13.434 [2024-12-05 19:49:32.259085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:13.434 [2024-12-05 19:49:32.259092] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:13.434 [2024-12-05 19:49:32.259099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:13.434 [2024-12-05 19:49:32.259111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:13.434 [2024-12-05 19:49:32.259118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:13.434 [2024-12-05 19:49:32.259139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:13.434 [2024-12-05 19:49:32.259158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:13.434 [2024-12-05 19:49:32.259165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:13.434 [2024-12-05 19:49:32.259172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:13.434 [2024-12-05 19:49:32.259179] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:13.434 [2024-12-05 19:49:32.259186] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:13.434 [2024-12-05 19:49:32.259194] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:13.434 [2024-12-05 19:49:32.259203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:13.434 [2024-12-05 19:49:32.259211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:13.434 [2024-12-05 19:49:32.259219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:13.434 [2024-12-05 19:49:32.259226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:13.434 [2024-12-05 19:49:32.259233] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:13.434 [2024-12-05 19:49:32.259241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:13.434 [2024-12-05 19:49:32.259248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:13.434 [2024-12-05 19:49:32.259255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:13.434 [2024-12-05 19:49:32.259262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:13.434 [2024-12-05 19:49:32.259270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:13.434 [2024-12-05 19:49:32.259277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:13.434 [2024-12-05 19:49:32.259284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:13.434 [2024-12-05 19:49:32.259291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:13.434 [2024-12-05 19:49:32.259298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:13.434 [2024-12-05 19:49:32.259305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:13.434 [2024-12-05 19:49:32.259312] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:13.434 [2024-12-05 19:49:32.259321] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:13.434 [2024-12-05 19:49:32.259334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:13.434 [2024-12-05 19:49:32.259342] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:13.434 [2024-12-05 19:49:32.259353] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:13.434 [2024-12-05 19:49:32.259360] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:13.434 [2024-12-05 19:49:32.259373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:13.434 [2024-12-05 19:49:32.259384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:13.434 [2024-12-05 19:49:32.259399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.650 ms 00:31:13.434 [2024-12-05 19:49:32.259406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:13.434 [2024-12-05 19:49:32.259476] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:31:13.434 [2024-12-05 19:49:32.259487] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:15.961 [2024-12-05 19:49:34.648587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.961 [2024-12-05 19:49:34.648641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:15.961 [2024-12-05 19:49:34.648656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2389.102 ms 00:31:15.961 [2024-12-05 19:49:34.648664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.961 [2024-12-05 19:49:34.674031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.961 [2024-12-05 19:49:34.674081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:15.961 [2024-12-05 19:49:34.674094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.848 ms 00:31:15.961 [2024-12-05 19:49:34.674103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.961 [2024-12-05 19:49:34.674215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.961 [2024-12-05 19:49:34.674232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:15.961 [2024-12-05 19:49:34.674241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:31:15.961 [2024-12-05 19:49:34.674249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.961 [2024-12-05 19:49:34.704125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.961 [2024-12-05 19:49:34.704171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:15.961 [2024-12-05 19:49:34.704186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 29.837 ms 00:31:15.961 [2024-12-05 19:49:34.704194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.961 [2024-12-05 19:49:34.704227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.961 [2024-12-05 19:49:34.704235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:15.961 [2024-12-05 19:49:34.704243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:15.961 [2024-12-05 19:49:34.704250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.961 [2024-12-05 19:49:34.704586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.961 [2024-12-05 19:49:34.704609] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:15.961 [2024-12-05 19:49:34.704619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.282 ms 00:31:15.961 [2024-12-05 19:49:34.704626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.961 [2024-12-05 19:49:34.704669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.961 [2024-12-05 19:49:34.704682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:15.961 [2024-12-05 19:49:34.704690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:31:15.961 [2024-12-05 19:49:34.704697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.961 [2024-12-05 19:49:34.718453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.961 [2024-12-05 19:49:34.718489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:15.961 [2024-12-05 19:49:34.718499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.734 ms 00:31:15.961 [2024-12-05 19:49:34.718507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.961 [2024-12-05 19:49:34.742590] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:15.961 [2024-12-05 19:49:34.742640] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:15.961 [2024-12-05 19:49:34.742654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.961 [2024-12-05 19:49:34.742664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:31:15.961 [2024-12-05 19:49:34.742675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.048 ms 00:31:15.961 [2024-12-05 19:49:34.742683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.961 [2024-12-05 19:49:34.756733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.961 [2024-12-05 19:49:34.756768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:31:15.961 [2024-12-05 19:49:34.756780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.003 ms 00:31:15.961 [2024-12-05 19:49:34.756789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.961 [2024-12-05 19:49:34.767622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.961 [2024-12-05 19:49:34.767655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:31:15.961 [2024-12-05 19:49:34.767665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.795 ms 00:31:15.961 [2024-12-05 19:49:34.767673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.961 [2024-12-05 19:49:34.778457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.961 [2024-12-05 19:49:34.778489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:31:15.961 [2024-12-05 19:49:34.778499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.752 ms 00:31:15.961 [2024-12-05 19:49:34.778507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.962 [2024-12-05 19:49:34.779113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.962 [2024-12-05 19:49:34.779148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:15.962 [2024-12-05 
19:49:34.779157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.521 ms 00:31:15.962 [2024-12-05 19:49:34.779165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.962 [2024-12-05 19:49:34.833028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.962 [2024-12-05 19:49:34.833083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:15.962 [2024-12-05 19:49:34.833096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 53.843 ms 00:31:15.962 [2024-12-05 19:49:34.833104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.962 [2024-12-05 19:49:34.843739] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:15.962 [2024-12-05 19:49:34.844547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.962 [2024-12-05 19:49:34.844577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:15.962 [2024-12-05 19:49:34.844589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.373 ms 00:31:15.962 [2024-12-05 19:49:34.844597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.962 [2024-12-05 19:49:34.844701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.962 [2024-12-05 19:49:34.844714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:31:15.962 [2024-12-05 19:49:34.844723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:15.962 [2024-12-05 19:49:34.844730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.962 [2024-12-05 19:49:34.844784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.962 [2024-12-05 19:49:34.844794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:15.962 [2024-12-05 19:49:34.844802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:31:15.962 [2024-12-05 19:49:34.844809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.962 [2024-12-05 19:49:34.844828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.962 [2024-12-05 19:49:34.844837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:15.962 [2024-12-05 19:49:34.844847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:15.962 [2024-12-05 19:49:34.844854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.962 [2024-12-05 19:49:34.844885] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:15.962 [2024-12-05 19:49:34.844894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.962 [2024-12-05 19:49:34.844901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:15.962 [2024-12-05 19:49:34.844909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:15.962 [2024-12-05 19:49:34.844917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.962 [2024-12-05 19:49:34.867922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.962 [2024-12-05 19:49:34.867968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:15.962 [2024-12-05 19:49:34.867981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.986 ms 00:31:15.962 [2024-12-05 19:49:34.867989] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.962 [2024-12-05 19:49:34.868058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.962 [2024-12-05 19:49:34.868067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:15.962 [2024-12-05 19:49:34.868075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:31:15.962 [2024-12-05 19:49:34.868082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.962 [2024-12-05 19:49:34.869058] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2633.661 ms, result 0 00:31:15.962 [2024-12-05 19:49:34.884288] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:15.962 [2024-12-05 19:49:34.900284] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:15.962 [2024-12-05 19:49:34.908399] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:16.526 19:49:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:16.526 19:49:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:16.526 19:49:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:16.526 19:49:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:16.526 19:49:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:16.526 [2024-12-05 19:49:35.528967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:16.526 [2024-12-05 19:49:35.529020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:16.526 [2024-12-05 19:49:35.529036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:16.526 [2024-12-05 19:49:35.529044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.526 [2024-12-05 19:49:35.529068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:16.526 [2024-12-05 19:49:35.529077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:16.526 [2024-12-05 19:49:35.529085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:16.526 [2024-12-05 19:49:35.529092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.526 [2024-12-05 19:49:35.529112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:16.526 [2024-12-05 19:49:35.529120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:16.526 [2024-12-05 19:49:35.529140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:16.526 [2024-12-05 19:49:35.529147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:16.526 [2024-12-05 19:49:35.529206] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.230 ms, result 0 00:31:16.783 true 00:31:16.783 19:49:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:16.783 { 00:31:16.783 "name": "ftl", 00:31:16.783 "properties": [ 00:31:16.783 { 00:31:16.783 "name": "superblock_version", 00:31:16.783 "value": 5, 00:31:16.783 "read-only": true 00:31:16.783 }, 
00:31:16.783 { 00:31:16.783 "name": "base_device", 00:31:16.783 "bands": [ 00:31:16.783 { 00:31:16.783 "id": 0, 00:31:16.783 "state": "CLOSED", 00:31:16.783 "validity": 1.0 00:31:16.783 }, 00:31:16.783 { 00:31:16.783 "id": 1, 00:31:16.783 "state": "CLOSED", 00:31:16.783 "validity": 1.0 00:31:16.783 }, 00:31:16.783 { 00:31:16.783 "id": 2, 00:31:16.783 "state": "CLOSED", 00:31:16.783 "validity": 0.007843137254901933 00:31:16.783 }, 00:31:16.783 { 00:31:16.783 "id": 3, 00:31:16.783 "state": "FREE", 00:31:16.783 "validity": 0.0 00:31:16.783 }, 00:31:16.783 { 00:31:16.783 "id": 4, 00:31:16.783 "state": "FREE", 00:31:16.783 "validity": 0.0 00:31:16.783 }, 00:31:16.783 { 00:31:16.783 "id": 5, 00:31:16.783 "state": "FREE", 00:31:16.783 "validity": 0.0 00:31:16.783 }, 00:31:16.783 { 00:31:16.783 "id": 6, 00:31:16.783 "state": "FREE", 00:31:16.783 "validity": 0.0 00:31:16.783 }, 00:31:16.783 { 00:31:16.783 "id": 7, 00:31:16.783 "state": "FREE", 00:31:16.783 "validity": 0.0 00:31:16.783 }, 00:31:16.783 { 00:31:16.783 "id": 8, 00:31:16.783 "state": "FREE", 00:31:16.783 "validity": 0.0 00:31:16.783 }, 00:31:16.783 { 00:31:16.783 "id": 9, 00:31:16.783 "state": "FREE", 00:31:16.783 "validity": 0.0 00:31:16.783 }, 00:31:16.783 { 00:31:16.783 "id": 10, 00:31:16.783 "state": "FREE", 00:31:16.783 "validity": 0.0 00:31:16.783 }, 00:31:16.783 { 00:31:16.783 "id": 11, 00:31:16.783 "state": "FREE", 00:31:16.783 "validity": 0.0 00:31:16.783 }, 00:31:16.783 { 00:31:16.783 "id": 12, 00:31:16.783 "state": "FREE", 00:31:16.783 "validity": 0.0 00:31:16.783 }, 00:31:16.783 { 00:31:16.783 "id": 13, 00:31:16.783 "state": "FREE", 00:31:16.783 "validity": 0.0 00:31:16.783 }, 00:31:16.783 { 00:31:16.783 "id": 14, 00:31:16.783 "state": "FREE", 00:31:16.783 "validity": 0.0 00:31:16.783 }, 00:31:16.783 { 00:31:16.783 "id": 15, 00:31:16.783 "state": "FREE", 00:31:16.783 "validity": 0.0 00:31:16.783 }, 00:31:16.783 { 00:31:16.783 "id": 16, 00:31:16.783 "state": "FREE", 00:31:16.783 "validity": 0.0 00:31:16.783 }, 00:31:16.783 { 00:31:16.783 "id": 17, 00:31:16.783 "state": "FREE", 00:31:16.783 "validity": 0.0 00:31:16.783 } 00:31:16.783 ], 00:31:16.783 "read-only": true 00:31:16.783 }, 00:31:16.783 { 00:31:16.783 "name": "cache_device", 00:31:16.783 "type": "bdev", 00:31:16.783 "chunks": [ 00:31:16.783 { 00:31:16.783 "id": 0, 00:31:16.783 "state": "INACTIVE", 00:31:16.783 "utilization": 0.0 00:31:16.783 }, 00:31:16.783 { 00:31:16.784 "id": 1, 00:31:16.784 "state": "OPEN", 00:31:16.784 "utilization": 0.0 00:31:16.784 }, 00:31:16.784 { 00:31:16.784 "id": 2, 00:31:16.784 "state": "OPEN", 00:31:16.784 "utilization": 0.0 00:31:16.784 }, 00:31:16.784 { 00:31:16.784 "id": 3, 00:31:16.784 "state": "FREE", 00:31:16.784 "utilization": 0.0 00:31:16.784 }, 00:31:16.784 { 00:31:16.784 "id": 4, 00:31:16.784 "state": "FREE", 00:31:16.784 "utilization": 0.0 00:31:16.784 } 00:31:16.784 ], 00:31:16.784 "read-only": true 00:31:16.784 }, 00:31:16.784 { 00:31:16.784 "name": "verbose_mode", 00:31:16.784 "value": true, 00:31:16.784 "unit": "", 00:31:16.784 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:16.784 }, 00:31:16.784 { 00:31:16.784 "name": "prep_upgrade_on_shutdown", 00:31:16.784 "value": false, 00:31:16.784 "unit": "", 00:31:16.784 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:16.784 } 00:31:16.784 ] 00:31:16.784 } 00:31:16.784 19:49:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:31:16.784 19:49:35 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:16.784 19:49:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:17.041 19:49:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:31:17.041 19:49:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:31:17.041 19:49:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:31:17.041 19:49:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:31:17.041 19:49:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:17.298 19:49:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:31:17.298 19:49:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:31:17.298 19:49:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:31:17.298 19:49:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:17.299 19:49:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:17.299 19:49:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:17.299 19:49:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:17.299 Validate MD5 checksum, iteration 1 00:31:17.299 19:49:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:17.299 19:49:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:17.299 19:49:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:17.299 19:49:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:17.299 19:49:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:17.299 19:49:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:17.299 [2024-12-05 19:49:36.221484] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
00:31:17.299 [2024-12-05 19:49:36.221599] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81821 ] 00:31:17.556 [2024-12-05 19:49:36.380294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:17.556 [2024-12-05 19:49:36.476042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:19.472  [2024-12-05T19:49:38.736Z] Copying: 656/1024 [MB] (656 MBps) [2024-12-05T19:49:45.288Z] Copying: 1024/1024 [MB] (average 617 MBps) 00:31:26.282 00:31:26.282 19:49:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:26.282 19:49:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:28.808 19:49:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:28.808 Validate MD5 checksum, iteration 2 00:31:28.808 19:49:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=183c759ebe648d53907ae5842fd36daf 00:31:28.808 19:49:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 183c759ebe648d53907ae5842fd36daf != \1\8\3\c\7\5\9\e\b\e\6\4\8\d\5\3\9\0\7\a\e\5\8\4\2\f\d\3\6\d\a\f ]] 00:31:28.808 19:49:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:28.808 19:49:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:28.808 19:49:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:28.808 19:49:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:28.808 19:49:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:28.808 19:49:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:28.808 19:49:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:28.808 19:49:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:28.808 19:49:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:28.808 [2024-12-05 19:49:47.357114] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 
00:31:28.808 [2024-12-05 19:49:47.357242] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81944 ] 00:31:28.808 [2024-12-05 19:49:47.516625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:28.808 [2024-12-05 19:49:47.615980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:30.182  [2024-12-05T19:49:49.753Z] Copying: 655/1024 [MB] (655 MBps) [2024-12-05T19:49:51.177Z] Copying: 1024/1024 [MB] (average 637 MBps) 00:31:32.171 00:31:32.171 19:49:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:32.171 19:49:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:34.078 19:49:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:34.078 19:49:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=86f2fbbd4ef728a89cc3846547d01c13 00:31:34.078 19:49:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 86f2fbbd4ef728a89cc3846547d01c13 != \8\6\f\2\f\b\b\d\4\e\f\7\2\8\a\8\9\c\c\3\8\4\6\5\4\7\d\0\1\c\1\3 ]] 00:31:34.078 19:49:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:34.078 19:49:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:34.078 19:49:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:31:34.078 19:49:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81751 ]] 00:31:34.078 19:49:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81751 00:31:34.078 19:49:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:31:34.078 19:49:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:31:34.078 19:49:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:34.078 19:49:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:34.078 19:49:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:34.078 19:49:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=82011 00:31:34.078 19:49:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:34.078 19:49:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 82011 00:31:34.078 19:49:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 82011 ']' 00:31:34.078 19:49:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:34.078 19:49:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:34.078 19:49:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:34.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:34.079 19:49:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
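With both windows verified, the test crashes the target on purpose: `tcp_target_shutdown_dirty` SIGKILLs pid 81751 so no 'FTL shutdown' management process ever runs (hence the "SHM: clean 0, shm_clean 0" in the recovery trace below), and `tcp_target_setup` relaunches spdk_tgt (pid 82011) from the saved tgt.json so startup is forced down the recovery path. A sketch of the two ftl/common.sh helpers as this xtrace shows them, with argument plumbing simplified:

    tcp_target_shutdown_dirty() {
        # SIGKILL, not SIGTERM: FTL gets no chance to persist metadata or
        # set the clean-shutdown flag in the superblock.
        [[ -n $spdk_tgt_pid ]] && kill -9 "$spdk_tgt_pid"
        unset spdk_tgt_pid
    }
    tcp_target_setup() {
        "$spdk_tgt_bin" "--cpumask=[0]" --config="$spdk_tgt_cnfg" &
        spdk_tgt_pid=$!
        # Block until the RPC socket /var/tmp/spdk.sock accepts connections.
        waitforlisten "$spdk_tgt_pid"
    }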
00:31:34.079 19:49:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:34.079 19:49:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:34.079 [2024-12-05 19:49:52.998405] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:31:34.079 [2024-12-05 19:49:52.998522] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82011 ] 00:31:34.336 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 81751 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:31:34.336 [2024-12-05 19:49:53.154877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:34.336 [2024-12-05 19:49:53.230904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:34.900 [2024-12-05 19:49:53.804430] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:34.900 [2024-12-05 19:49:53.804480] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:35.159 [2024-12-05 19:49:53.947465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.159 [2024-12-05 19:49:53.947502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:35.159 [2024-12-05 19:49:53.947513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:35.159 [2024-12-05 19:49:53.947520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.159 [2024-12-05 19:49:53.947562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.159 [2024-12-05 19:49:53.947570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:35.159 [2024-12-05 19:49:53.947577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:31:35.159 [2024-12-05 19:49:53.947583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.159 [2024-12-05 19:49:53.947601] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:35.159 [2024-12-05 19:49:53.948180] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:35.159 [2024-12-05 19:49:53.948198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.159 [2024-12-05 19:49:53.948204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:35.159 [2024-12-05 19:49:53.948210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.603 ms 00:31:35.159 [2024-12-05 19:49:53.948216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.159 [2024-12-05 19:49:53.948454] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:35.159 [2024-12-05 19:49:53.960750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.159 [2024-12-05 19:49:53.960781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:35.159 [2024-12-05 19:49:53.960791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.297 ms 00:31:35.159 [2024-12-05 19:49:53.960798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.159 [2024-12-05 19:49:53.967445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:31:35.159 [2024-12-05 19:49:53.967472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:35.159 [2024-12-05 19:49:53.967480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:31:35.159 [2024-12-05 19:49:53.967486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.159 [2024-12-05 19:49:53.967729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.159 [2024-12-05 19:49:53.967742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:35.159 [2024-12-05 19:49:53.967749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.185 ms 00:31:35.160 [2024-12-05 19:49:53.967755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.160 [2024-12-05 19:49:53.967794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.160 [2024-12-05 19:49:53.967801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:35.160 [2024-12-05 19:49:53.967807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:31:35.160 [2024-12-05 19:49:53.967812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.160 [2024-12-05 19:49:53.967830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.160 [2024-12-05 19:49:53.967836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:35.160 [2024-12-05 19:49:53.967843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:35.160 [2024-12-05 19:49:53.967848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.160 [2024-12-05 19:49:53.967863] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:35.160 [2024-12-05 19:49:53.970110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.160 [2024-12-05 19:49:53.970140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:35.160 [2024-12-05 19:49:53.970148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.250 ms 00:31:35.160 [2024-12-05 19:49:53.970154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.160 [2024-12-05 19:49:53.970176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.160 [2024-12-05 19:49:53.970183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:35.160 [2024-12-05 19:49:53.970189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:35.160 [2024-12-05 19:49:53.970195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.160 [2024-12-05 19:49:53.970211] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:35.160 [2024-12-05 19:49:53.970226] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:35.160 [2024-12-05 19:49:53.970253] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:35.160 [2024-12-05 19:49:53.970265] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:35.160 [2024-12-05 19:49:53.970344] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:35.160 [2024-12-05 19:49:53.970352] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:35.160 [2024-12-05 19:49:53.970360] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:35.160 [2024-12-05 19:49:53.970367] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:35.160 [2024-12-05 19:49:53.970374] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:31:35.160 [2024-12-05 19:49:53.970380] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:35.160 [2024-12-05 19:49:53.970386] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:35.160 [2024-12-05 19:49:53.970391] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:35.160 [2024-12-05 19:49:53.970397] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:35.160 [2024-12-05 19:49:53.970405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.160 [2024-12-05 19:49:53.970410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:35.160 [2024-12-05 19:49:53.970416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.195 ms 00:31:35.160 [2024-12-05 19:49:53.970421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.160 [2024-12-05 19:49:53.970486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.160 [2024-12-05 19:49:53.970496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:35.160 [2024-12-05 19:49:53.970502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:31:35.160 [2024-12-05 19:49:53.970507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.160 [2024-12-05 19:49:53.970584] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:35.160 [2024-12-05 19:49:53.970594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:35.160 [2024-12-05 19:49:53.970600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:35.160 [2024-12-05 19:49:53.970606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:35.160 [2024-12-05 19:49:53.970612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:35.160 [2024-12-05 19:49:53.970618] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:35.160 [2024-12-05 19:49:53.970626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:35.160 [2024-12-05 19:49:53.970632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:35.160 [2024-12-05 19:49:53.970637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:35.160 [2024-12-05 19:49:53.970642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:35.160 [2024-12-05 19:49:53.970648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:35.160 [2024-12-05 19:49:53.970653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:35.160 [2024-12-05 19:49:53.970657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:35.160 [2024-12-05 19:49:53.970663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:35.160 [2024-12-05 19:49:53.970668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:31:35.160 [2024-12-05 19:49:53.970673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:35.160 [2024-12-05 19:49:53.970677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:35.160 [2024-12-05 19:49:53.970682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:35.160 [2024-12-05 19:49:53.970688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:35.160 [2024-12-05 19:49:53.970693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:35.160 [2024-12-05 19:49:53.970698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:35.160 [2024-12-05 19:49:53.970708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:35.160 [2024-12-05 19:49:53.970713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:35.160 [2024-12-05 19:49:53.970718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:35.160 [2024-12-05 19:49:53.970723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:35.160 [2024-12-05 19:49:53.970729] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:35.160 [2024-12-05 19:49:53.970734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:35.160 [2024-12-05 19:49:53.970739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:35.160 [2024-12-05 19:49:53.970743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:35.160 [2024-12-05 19:49:53.970748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:35.160 [2024-12-05 19:49:53.970753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:35.160 [2024-12-05 19:49:53.970758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:35.160 [2024-12-05 19:49:53.970763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:35.160 [2024-12-05 19:49:53.970768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:35.160 [2024-12-05 19:49:53.970773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:35.160 [2024-12-05 19:49:53.970778] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:35.160 [2024-12-05 19:49:53.970783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:35.160 [2024-12-05 19:49:53.970788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:35.160 [2024-12-05 19:49:53.970794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:35.160 [2024-12-05 19:49:53.970799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:35.160 [2024-12-05 19:49:53.970804] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:35.160 [2024-12-05 19:49:53.970809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:35.160 [2024-12-05 19:49:53.970815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:35.160 [2024-12-05 19:49:53.970820] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:35.160 [2024-12-05 19:49:53.970825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:35.160 [2024-12-05 19:49:53.970831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:35.160 [2024-12-05 19:49:53.970836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:31:35.160 [2024-12-05 19:49:53.970842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:35.160 [2024-12-05 19:49:53.970847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:35.160 [2024-12-05 19:49:53.970852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:35.160 [2024-12-05 19:49:53.970857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:35.160 [2024-12-05 19:49:53.970862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:35.160 [2024-12-05 19:49:53.970867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:35.160 [2024-12-05 19:49:53.970872] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:35.160 [2024-12-05 19:49:53.970879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:35.160 [2024-12-05 19:49:53.970886] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:35.160 [2024-12-05 19:49:53.970891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:35.160 [2024-12-05 19:49:53.970897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:35.160 [2024-12-05 19:49:53.970902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:35.160 [2024-12-05 19:49:53.970907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:35.160 [2024-12-05 19:49:53.970913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:35.160 [2024-12-05 19:49:53.970918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:35.160 [2024-12-05 19:49:53.970923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:35.160 [2024-12-05 19:49:53.970929] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:35.161 [2024-12-05 19:49:53.970934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:35.161 [2024-12-05 19:49:53.970940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:35.161 [2024-12-05 19:49:53.970946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:35.161 [2024-12-05 19:49:53.970951] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:35.161 [2024-12-05 19:49:53.970957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:35.161 [2024-12-05 19:49:53.970962] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:31:35.161 [2024-12-05 19:49:53.970971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:35.161 [2024-12-05 19:49:53.970979] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:35.161 [2024-12-05 19:49:53.970984] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:35.161 [2024-12-05 19:49:53.970990] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:35.161 [2024-12-05 19:49:53.970995] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:35.161 [2024-12-05 19:49:53.971001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.161 [2024-12-05 19:49:53.971007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:35.161 [2024-12-05 19:49:53.971012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.470 ms 00:31:35.161 [2024-12-05 19:49:53.971017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.161 [2024-12-05 19:49:53.990024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.161 [2024-12-05 19:49:53.990048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:35.161 [2024-12-05 19:49:53.990056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.970 ms 00:31:35.161 [2024-12-05 19:49:53.990062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.161 [2024-12-05 19:49:53.990091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.161 [2024-12-05 19:49:53.990097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:35.161 [2024-12-05 19:49:53.990103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:35.161 [2024-12-05 19:49:53.990109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.161 [2024-12-05 19:49:54.013948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.161 [2024-12-05 19:49:54.013974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:35.161 [2024-12-05 19:49:54.013983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.790 ms 00:31:35.161 [2024-12-05 19:49:54.013989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.161 [2024-12-05 19:49:54.014011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.161 [2024-12-05 19:49:54.014017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:35.161 [2024-12-05 19:49:54.014024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:35.161 [2024-12-05 19:49:54.014032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.161 [2024-12-05 19:49:54.014103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.161 [2024-12-05 19:49:54.014111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:35.161 [2024-12-05 19:49:54.014118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:31:35.161 [2024-12-05 19:49:54.014123] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:35.161 [2024-12-05 19:49:54.014164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.161 [2024-12-05 19:49:54.014171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:35.161 [2024-12-05 19:49:54.014177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:31:35.161 [2024-12-05 19:49:54.014183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.161 [2024-12-05 19:49:54.025500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.161 [2024-12-05 19:49:54.025522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:35.161 [2024-12-05 19:49:54.025530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.298 ms 00:31:35.161 [2024-12-05 19:49:54.025536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.161 [2024-12-05 19:49:54.025611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.161 [2024-12-05 19:49:54.025619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:31:35.161 [2024-12-05 19:49:54.025625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:35.161 [2024-12-05 19:49:54.025631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.161 [2024-12-05 19:49:54.048953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.161 [2024-12-05 19:49:54.049068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:31:35.161 [2024-12-05 19:49:54.049086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.306 ms 00:31:35.161 [2024-12-05 19:49:54.049095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.161 [2024-12-05 19:49:54.057372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.161 [2024-12-05 19:49:54.057395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:35.161 [2024-12-05 19:49:54.057408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.405 ms 00:31:35.161 [2024-12-05 19:49:54.057415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.161 [2024-12-05 19:49:54.100444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.161 [2024-12-05 19:49:54.100482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:35.161 [2024-12-05 19:49:54.100491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.986 ms 00:31:35.161 [2024-12-05 19:49:54.100497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.161 [2024-12-05 19:49:54.100598] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:31:35.161 [2024-12-05 19:49:54.100671] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:31:35.161 [2024-12-05 19:49:54.100740] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:31:35.161 [2024-12-05 19:49:54.100810] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:31:35.161 [2024-12-05 19:49:54.100817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.161 [2024-12-05 19:49:54.100823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:31:35.161 [2024-12-05 
19:49:54.100830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.286 ms 00:31:35.161 [2024-12-05 19:49:54.100835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.161 [2024-12-05 19:49:54.100876] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:31:35.161 [2024-12-05 19:49:54.100885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.161 [2024-12-05 19:49:54.100893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:31:35.161 [2024-12-05 19:49:54.100900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:35.161 [2024-12-05 19:49:54.100906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.161 [2024-12-05 19:49:54.112103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.161 [2024-12-05 19:49:54.112139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:31:35.161 [2024-12-05 19:49:54.112148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.180 ms 00:31:35.161 [2024-12-05 19:49:54.112154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.161 [2024-12-05 19:49:54.118591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.161 [2024-12-05 19:49:54.118614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:31:35.161 [2024-12-05 19:49:54.118621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:35.161 [2024-12-05 19:49:54.118627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.161 [2024-12-05 19:49:54.118690] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:31:35.161 [2024-12-05 19:49:54.118795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.161 [2024-12-05 19:49:54.118803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:35.161 [2024-12-05 19:49:54.118810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.106 ms 00:31:35.161 [2024-12-05 19:49:54.118816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.747 [2024-12-05 19:49:54.533223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.747 [2024-12-05 19:49:54.533278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:35.747 [2024-12-05 19:49:54.533292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 413.768 ms 00:31:35.747 [2024-12-05 19:49:54.533300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.747 [2024-12-05 19:49:54.537068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.747 [2024-12-05 19:49:54.537099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:35.747 [2024-12-05 19:49:54.537109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.812 ms 00:31:35.747 [2024-12-05 19:49:54.537117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.747 [2024-12-05 19:49:54.537433] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:31:35.747 [2024-12-05 19:49:54.537460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.747 [2024-12-05 19:49:54.537468] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:35.747 [2024-12-05 19:49:54.537477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.309 ms 00:31:35.747 [2024-12-05 19:49:54.537485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.747 [2024-12-05 19:49:54.537512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.747 [2024-12-05 19:49:54.537521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:35.747 [2024-12-05 19:49:54.537529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:35.747 [2024-12-05 19:49:54.537542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:35.747 [2024-12-05 19:49:54.537574] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 418.879 ms, result 0 00:31:35.747 [2024-12-05 19:49:54.537609] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:31:35.747 [2024-12-05 19:49:54.537698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:35.747 [2024-12-05 19:49:54.537708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:35.747 [2024-12-05 19:49:54.537716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.090 ms 00:31:35.747 [2024-12-05 19:49:54.537722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:36.006 [2024-12-05 19:49:54.957184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:36.006 [2024-12-05 19:49:54.957241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:36.006 [2024-12-05 19:49:54.957270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 418.514 ms 00:31:36.006 [2024-12-05 19:49:54.957280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:36.006 [2024-12-05 19:49:54.961078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:36.006 [2024-12-05 19:49:54.961108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:36.006 [2024-12-05 19:49:54.961117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.839 ms 00:31:36.006 [2024-12-05 19:49:54.961136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:36.006 [2024-12-05 19:49:54.961482] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:31:36.006 [2024-12-05 19:49:54.961509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:36.006 [2024-12-05 19:49:54.961517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:36.006 [2024-12-05 19:49:54.961526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.347 ms 00:31:36.006 [2024-12-05 19:49:54.961533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:36.006 [2024-12-05 19:49:54.961596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:36.006 [2024-12-05 19:49:54.961605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:36.006 [2024-12-05 19:49:54.961613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:36.006 [2024-12-05 19:49:54.961620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:36.006 [2024-12-05 
19:49:54.961654] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 424.039 ms, result 0 00:31:36.006 [2024-12-05 19:49:54.961693] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:36.006 [2024-12-05 19:49:54.961702] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:36.006 [2024-12-05 19:49:54.961712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:36.006 [2024-12-05 19:49:54.961720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:31:36.006 [2024-12-05 19:49:54.961728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 843.036 ms 00:31:36.006 [2024-12-05 19:49:54.961736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:36.006 [2024-12-05 19:49:54.961765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:36.006 [2024-12-05 19:49:54.961783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:31:36.006 [2024-12-05 19:49:54.961803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:36.006 [2024-12-05 19:49:54.961815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:36.006 [2024-12-05 19:49:54.972678] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:36.006 [2024-12-05 19:49:54.972775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:36.006 [2024-12-05 19:49:54.972784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:36.006 [2024-12-05 19:49:54.972794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.939 ms 00:31:36.006 [2024-12-05 19:49:54.972801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:36.006 [2024-12-05 19:49:54.973503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:36.006 [2024-12-05 19:49:54.973519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:31:36.006 [2024-12-05 19:49:54.973531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.628 ms 00:31:36.006 [2024-12-05 19:49:54.973538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:36.006 [2024-12-05 19:49:54.975794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:36.006 [2024-12-05 19:49:54.975816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:31:36.006 [2024-12-05 19:49:54.975826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.238 ms 00:31:36.007 [2024-12-05 19:49:54.975834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:36.007 [2024-12-05 19:49:54.975874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:36.007 [2024-12-05 19:49:54.975882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:31:36.007 [2024-12-05 19:49:54.975890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:36.007 [2024-12-05 19:49:54.975900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:36.007 [2024-12-05 19:49:54.975998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:36.007 [2024-12-05 19:49:54.976012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:36.007 
[2024-12-05 19:49:54.976020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:31:36.007 [2024-12-05 19:49:54.976027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:36.007 [2024-12-05 19:49:54.976046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:36.007 [2024-12-05 19:49:54.976054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:36.007 [2024-12-05 19:49:54.976062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:36.007 [2024-12-05 19:49:54.976069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:36.007 [2024-12-05 19:49:54.976099] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:36.007 [2024-12-05 19:49:54.976108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:36.007 [2024-12-05 19:49:54.976115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:36.007 [2024-12-05 19:49:54.976122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:31:36.007 [2024-12-05 19:49:54.976139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:36.007 [2024-12-05 19:49:54.976188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:36.007 [2024-12-05 19:49:54.976196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:36.007 [2024-12-05 19:49:54.976204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:31:36.007 [2024-12-05 19:49:54.976211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:36.007 [2024-12-05 19:49:54.977058] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1029.191 ms, result 0 00:31:36.007 [2024-12-05 19:49:54.989400] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:36.007 [2024-12-05 19:49:55.005394] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:36.265 [2024-12-05 19:49:55.013523] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:36.829 Validate MD5 checksum, iteration 1 00:31:36.829 19:49:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:36.829 19:49:55 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:36.829 19:49:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:36.829 19:49:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:36.829 19:49:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:31:36.829 19:49:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:36.829 19:49:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:36.829 19:49:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:36.829 19:49:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:36.829 19:49:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:36.829 19:49:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:36.829 19:49:55 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:36.829 19:49:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:36.829 19:49:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:36.829 19:49:55 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:36.829 [2024-12-05 19:49:55.583348] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization... 00:31:36.829 [2024-12-05 19:49:55.583456] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82040 ] 00:31:36.829 [2024-12-05 19:49:55.741410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.103 [2024-12-05 19:49:55.836422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:38.476  [2024-12-05T19:49:58.049Z] Copying: 681/1024 [MB] (681 MBps) [2024-12-05T19:49:59.425Z] Copying: 1024/1024 [MB] (average 675 MBps) 00:31:40.419 00:31:40.419 19:49:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:40.419 19:49:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:42.314 19:50:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:42.314 Validate MD5 checksum, iteration 2 00:31:42.314 19:50:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=183c759ebe648d53907ae5842fd36daf 00:31:42.314 19:50:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 183c759ebe648d53907ae5842fd36daf != \1\8\3\c\7\5\9\e\b\e\6\4\8\d\5\3\9\0\7\a\e\5\8\4\2\f\d\3\6\d\a\f ]] 00:31:42.314 19:50:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:42.314 19:50:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:42.314 19:50:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:42.314 19:50:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:42.314 19:50:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:42.314 19:50:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:42.314 19:50:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:42.314 19:50:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:42.314 19:50:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:42.314 [2024-12-05 19:50:01.271216] Starting SPDK v25.01-pre git sha1 
3c8001115 / DPDK 24.03.0 initialization... 00:31:42.314 [2024-12-05 19:50:01.271334] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82109 ] 00:31:42.572 [2024-12-05 19:50:01.431840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:42.572 [2024-12-05 19:50:01.527851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:44.524  [2024-12-05T19:50:03.786Z] Copying: 692/1024 [MB] (692 MBps) [2024-12-05T19:50:04.725Z] Copying: 1024/1024 [MB] (average 678 MBps) 00:31:45.719 00:31:45.719 19:50:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:45.719 19:50:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:48.245 19:50:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:48.245 19:50:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=86f2fbbd4ef728a89cc3846547d01c13 00:31:48.245 19:50:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 86f2fbbd4ef728a89cc3846547d01c13 != \8\6\f\2\f\b\b\d\4\e\f\7\2\8\a\8\9\c\c\3\8\4\6\5\4\7\d\0\1\c\1\3 ]] 00:31:48.245 19:50:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:48.245 19:50:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:48.245 19:50:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:31:48.245 19:50:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:31:48.245 19:50:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:31:48.245 19:50:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:48.245 19:50:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:31:48.245 19:50:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:31:48.245 19:50:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:31:48.245 19:50:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:31:48.245 19:50:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 82011 ]] 00:31:48.245 19:50:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 82011 00:31:48.245 19:50:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 82011 ']' 00:31:48.245 19:50:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 82011 00:31:48.245 19:50:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:48.245 19:50:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:48.245 19:50:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82011 00:31:48.245 killing process with pid 82011 00:31:48.245 19:50:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:48.245 19:50:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:48.245 19:50:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82011' 00:31:48.245 19:50:06 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 82011 00:31:48.245 19:50:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 82011 00:31:48.505 [2024-12-05 19:50:07.422748] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:48.505 [2024-12-05 19:50:07.433442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.505 [2024-12-05 19:50:07.433487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:48.505 [2024-12-05 19:50:07.433498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:48.505 [2024-12-05 19:50:07.433504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.505 [2024-12-05 19:50:07.433522] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:48.505 [2024-12-05 19:50:07.435591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.505 [2024-12-05 19:50:07.435618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:48.505 [2024-12-05 19:50:07.435630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.058 ms 00:31:48.505 [2024-12-05 19:50:07.435636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.505 [2024-12-05 19:50:07.435838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.505 [2024-12-05 19:50:07.435852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:48.505 [2024-12-05 19:50:07.435859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.183 ms 00:31:48.505 [2024-12-05 19:50:07.435865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.505 [2024-12-05 19:50:07.436909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.505 [2024-12-05 19:50:07.436931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:48.505 [2024-12-05 19:50:07.436939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.030 ms 00:31:48.505 [2024-12-05 19:50:07.436948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.505 [2024-12-05 19:50:07.437839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.505 [2024-12-05 19:50:07.437860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:48.505 [2024-12-05 19:50:07.437867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.867 ms 00:31:48.505 [2024-12-05 19:50:07.437873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.505 [2024-12-05 19:50:07.445423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.505 [2024-12-05 19:50:07.445454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:48.505 [2024-12-05 19:50:07.445462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.522 ms 00:31:48.505 [2024-12-05 19:50:07.445473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.505 [2024-12-05 19:50:07.449464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.505 [2024-12-05 19:50:07.449492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:48.505 [2024-12-05 19:50:07.449501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.963 ms 00:31:48.505 [2024-12-05 19:50:07.449508] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:48.505 [2024-12-05 19:50:07.449586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.505 [2024-12-05 19:50:07.449595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:48.505 [2024-12-05 19:50:07.449602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:31:48.505 [2024-12-05 19:50:07.449611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.505 [2024-12-05 19:50:07.457066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.505 [2024-12-05 19:50:07.457096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:48.505 [2024-12-05 19:50:07.457104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.442 ms 00:31:48.505 [2024-12-05 19:50:07.457110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.505 [2024-12-05 19:50:07.465871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.505 [2024-12-05 19:50:07.465901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:48.505 [2024-12-05 19:50:07.465910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.720 ms 00:31:48.505 [2024-12-05 19:50:07.465916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.505 [2024-12-05 19:50:07.473054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.505 [2024-12-05 19:50:07.473084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:48.505 [2024-12-05 19:50:07.473092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.111 ms 00:31:48.505 [2024-12-05 19:50:07.473098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.505 [2024-12-05 19:50:07.479860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.505 [2024-12-05 19:50:07.479890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:48.505 [2024-12-05 19:50:07.479897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.702 ms 00:31:48.505 [2024-12-05 19:50:07.479902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.505 [2024-12-05 19:50:07.479928] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:48.505 [2024-12-05 19:50:07.479940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:48.505 [2024-12-05 19:50:07.479949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:48.505 [2024-12-05 19:50:07.479955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:48.505 [2024-12-05 19:50:07.479962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:48.505 [2024-12-05 19:50:07.479968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:48.505 [2024-12-05 19:50:07.479974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:48.505 [2024-12-05 19:50:07.479980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:48.505 [2024-12-05 19:50:07.479986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:48.505 
[2024-12-05 19:50:07.479992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:48.505 [2024-12-05 19:50:07.479998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:48.505 [2024-12-05 19:50:07.480004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:48.505 [2024-12-05 19:50:07.480010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:48.505 [2024-12-05 19:50:07.480015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:48.506 [2024-12-05 19:50:07.480021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:48.506 [2024-12-05 19:50:07.480027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:48.506 [2024-12-05 19:50:07.480033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:48.506 [2024-12-05 19:50:07.480039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:48.506 [2024-12-05 19:50:07.480045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:48.506 [2024-12-05 19:50:07.480052] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:48.506 [2024-12-05 19:50:07.480058] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 30d6ecf8-6f20-4111-8db2-0a3086068cce 00:31:48.506 [2024-12-05 19:50:07.480064] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:48.506 [2024-12-05 19:50:07.480070] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:31:48.506 [2024-12-05 19:50:07.480076] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:31:48.506 [2024-12-05 19:50:07.480082] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:31:48.506 [2024-12-05 19:50:07.480087] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:48.506 [2024-12-05 19:50:07.480093] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:48.506 [2024-12-05 19:50:07.480104] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:48.506 [2024-12-05 19:50:07.480109] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:48.506 [2024-12-05 19:50:07.480114] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:48.506 [2024-12-05 19:50:07.480120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.506 [2024-12-05 19:50:07.480142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:48.506 [2024-12-05 19:50:07.480150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.192 ms 00:31:48.506 [2024-12-05 19:50:07.480156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:48.506 [2024-12-05 19:50:07.489891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:48.506 [2024-12-05 19:50:07.489923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:48.506 [2024-12-05 19:50:07.489933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.721 ms 00:31:48.506 [2024-12-05 19:50:07.489940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
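In contrast to the kill -9 earlier, this second shutdown is the clean one: `killprocess 82011` (traced a few lines back) confirms the pid is alive, checks that it is an SPDK reactor rather than a sudo wrapper, sends the default SIGTERM, and waits, which is exactly what gives the 'FTL shutdown' management process time to run the persist and dump steps logged here. A simplified reconstruction from the autotest_common.sh xtrace:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1        # still running?
        # The real helper special-cases processes launched via sudo; the
        # target here reports comm 'reactor_0', so the plain path is taken.
        [[ $(uname) == Linux ]] && ps --no-headers -o comm= "$pid" > /dev/null
        echo "killing process with pid $pid"
        kill "$pid"                       # SIGTERM -> graceful 'FTL shutdown'
        wait "$pid"                       # returns once the reactor exits
    }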
00:31:48.506 [2024-12-05 19:50:07.490237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:48.506 [2024-12-05 19:50:07.490250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:31:48.506 [2024-12-05 19:50:07.490258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.268 ms
00:31:48.506 [2024-12-05 19:50:07.490263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:48.767 [2024-12-05 19:50:07.522913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:48.767 [2024-12-05 19:50:07.522958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:31:48.767 [2024-12-05 19:50:07.522968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:48.767 [2024-12-05 19:50:07.522974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:48.767 [2024-12-05 19:50:07.523016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:48.767 [2024-12-05 19:50:07.523023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:31:48.767 [2024-12-05 19:50:07.523029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:48.767 [2024-12-05 19:50:07.523034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:48.767 [2024-12-05 19:50:07.523112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:48.767 [2024-12-05 19:50:07.523120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:31:48.767 [2024-12-05 19:50:07.523141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:48.767 [2024-12-05 19:50:07.523147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:48.767 [2024-12-05 19:50:07.523164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:48.767 [2024-12-05 19:50:07.523171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:31:48.767 [2024-12-05 19:50:07.523177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:48.767 [2024-12-05 19:50:07.523183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:48.767 [2024-12-05 19:50:07.581879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:48.767 [2024-12-05 19:50:07.581923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:31:48.767 [2024-12-05 19:50:07.581932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:48.767 [2024-12-05 19:50:07.581938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:48.767 [2024-12-05 19:50:07.630880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:48.767 [2024-12-05 19:50:07.630923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:31:48.767 [2024-12-05 19:50:07.630933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:48.767 [2024-12-05 19:50:07.630939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:48.767 [2024-12-05 19:50:07.630999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:48.767 [2024-12-05 19:50:07.631007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:31:48.767 [2024-12-05 19:50:07.631013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:48.767 [2024-12-05 19:50:07.631019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:48.767 [2024-12-05 19:50:07.631069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:48.767 [2024-12-05 19:50:07.631084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:31:48.767 [2024-12-05 19:50:07.631090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:48.767 [2024-12-05 19:50:07.631096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:48.767 [2024-12-05 19:50:07.631183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:48.767 [2024-12-05 19:50:07.631191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:31:48.767 [2024-12-05 19:50:07.631197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:48.767 [2024-12-05 19:50:07.631203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:48.767 [2024-12-05 19:50:07.631230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:48.768 [2024-12-05 19:50:07.631237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:31:48.768 [2024-12-05 19:50:07.631245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:48.768 [2024-12-05 19:50:07.631251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:48.768 [2024-12-05 19:50:07.631279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:48.768 [2024-12-05 19:50:07.631286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:31:48.768 [2024-12-05 19:50:07.631293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:48.768 [2024-12-05 19:50:07.631298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:48.768 [2024-12-05 19:50:07.631329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:48.768 [2024-12-05 19:50:07.631339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:31:48.768 [2024-12-05 19:50:07.631345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:48.768 [2024-12-05 19:50:07.631351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:48.768 [2024-12-05 19:50:07.631444] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 197.981 ms, result 0
00:31:49.710 19:50:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:31:49.710 19:50:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:31:49.710 19:50:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:31:49.710 19:50:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:31:49.710 19:50:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:31:49.710 19:50:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:31:49.710 Remove shared memory files
00:31:49.710 19:50:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:31:49.710 19:50:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:31:49.710 19:50:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:31:49.710 19:50:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:31:49.710 19:50:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81751
00:31:49.710 19:50:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:31:49.710 19:50:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:31:49.710
00:31:49.710 real 1m29.074s
00:31:49.710 user 2m1.472s
00:31:49.710 sys 0m18.171s
00:31:49.710 19:50:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:49.710 19:50:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:31:49.710 ************************************
00:31:49.710 END TEST ftl_upgrade_shutdown ************************************
00:31:49.710 19:50:08 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:31:49.710 19:50:08 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:31:49.710 19:50:08 ftl -- ftl/ftl.sh@14 -- # killprocess 75218
00:31:49.710 19:50:08 ftl -- common/autotest_common.sh@954 -- # '[' -z 75218 ']'
00:31:49.710 19:50:08 ftl -- common/autotest_common.sh@958 -- # kill -0 75218
00:31:49.710 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (75218) - No such process
00:31:49.710 Process with pid 75218 is not found
00:31:49.710 19:50:08 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 75218 is not found'
00:31:49.710 19:50:08 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:31:49.710 19:50:08 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=82219
00:31:49.710 19:50:08 ftl -- ftl/ftl.sh@20 -- # waitforlisten 82219
00:31:49.710 19:50:08 ftl -- common/autotest_common.sh@835 -- # '[' -z 82219 ']'
00:31:49.710 19:50:08 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:49.710 19:50:08 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:49.710 19:50:08 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:31:49.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:49.710 19:50:08 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:49.710 19:50:08 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:49.710 19:50:08 ftl -- common/autotest_common.sh@10 -- # set +x
00:31:49.710 [2024-12-05 19:50:08.632364] Starting SPDK v25.01-pre git sha1 3c8001115 / DPDK 24.03.0 initialization...
00:31:49.710 [2024-12-05 19:50:08.632487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82219 ]
00:31:49.970 [2024-12-05 19:50:08.786615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:49.970 [2024-12-05 19:50:08.868281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:50.541 19:50:09 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:50.541 19:50:09 ftl -- common/autotest_common.sh@868 -- # return 0
00:31:50.541 19:50:09 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:31:50.801 nvme0n1
00:31:50.801 19:50:09 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:31:50.801 19:50:09 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:31:50.801 19:50:09 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:31:51.061 19:50:09 ftl -- ftl/common.sh@28 -- # stores=480c18bb-309f-49ce-98ea-ba49f978055b
00:31:51.061 19:50:09 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:31:51.061 19:50:09 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 480c18bb-309f-49ce-98ea-ba49f978055b
00:31:51.322 19:50:10 ftl -- ftl/ftl.sh@23 -- # killprocess 82219
00:31:51.322 19:50:10 ftl -- common/autotest_common.sh@954 -- # '[' -z 82219 ']'
00:31:51.322 19:50:10 ftl -- common/autotest_common.sh@958 -- # kill -0 82219
00:31:51.322 19:50:10 ftl -- common/autotest_common.sh@959 -- # uname
00:31:51.322 19:50:10 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:51.322 19:50:10 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82219
00:31:51.322 19:50:10 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:51.322 19:50:10 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:51.322 killing process with pid 82219
00:31:51.322 19:50:10 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82219'
00:31:51.322 19:50:10 ftl -- common/autotest_common.sh@973 -- # kill 82219
00:31:51.322 19:50:10 ftl -- common/autotest_common.sh@978 -- # wait 82219
00:31:52.702 19:50:11 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:31:52.702 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:31:52.702 Waiting for block devices as requested
00:31:52.702 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:31:52.702 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:31:52.702 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:31:52.960 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:31:58.252 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:31:58.252 19:50:16 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:31:58.252 Remove shared memory files
00:31:58.252 19:50:16 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:31:58.252 19:50:16 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:31:58.252 19:50:16 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:31:58.252 19:50:16 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:31:58.252 19:50:16 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:31:58.252 19:50:16 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:31:58.252 ************************************
00:31:58.252 END TEST ftl ************************************
00:31:58.252
00:31:58.252 real 10m10.179s
00:31:58.252 user 12m33.428s
00:31:58.252 sys 1m10.163s
00:31:58.252 19:50:16 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:58.252 19:50:16 ftl -- common/autotest_common.sh@10 -- # set +x
00:31:58.252 19:50:16 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:31:58.252 19:50:16 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:31:58.252 19:50:16 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:31:58.252 19:50:16 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:31:58.252 19:50:16 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:31:58.252 19:50:16 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:31:58.252 19:50:16 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:31:58.252 19:50:16 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:31:58.252 19:50:16 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:31:58.252 19:50:16 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:31:58.252 19:50:16 -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:58.252 19:50:16 -- common/autotest_common.sh@10 -- # set +x
00:31:58.252 19:50:16 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:31:58.252 19:50:16 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:31:58.252 19:50:16 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:31:58.252 19:50:16 -- common/autotest_common.sh@10 -- # set +x
00:31:59.264 INFO: APP EXITING
00:31:59.264 INFO: killing all VMs
00:31:59.264 INFO: killing vhost app
00:31:59.264 INFO: EXIT DONE
00:31:59.264 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:31:59.523 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:31:59.523 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:31:59.523 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:31:59.782 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:32:00.041 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:32:00.299 Cleaning
00:32:00.299 Removing: /var/run/dpdk/spdk0/config
00:32:00.299 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:32:00.299 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:32:00.299 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:32:00.299 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:32:00.299 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:32:00.299 Removing: /var/run/dpdk/spdk0/hugepage_info
00:32:00.299 Removing: /var/run/dpdk/spdk0
00:32:00.299 Removing: /var/run/dpdk/spdk_pid56998
00:32:00.299 Removing: /var/run/dpdk/spdk_pid57211
00:32:00.299 Removing: /var/run/dpdk/spdk_pid57429
00:32:00.299 Removing: /var/run/dpdk/spdk_pid57528
00:32:00.299 Removing: /var/run/dpdk/spdk_pid57573
00:32:00.299 Removing: /var/run/dpdk/spdk_pid57702
00:32:00.299 Removing: /var/run/dpdk/spdk_pid57720
00:32:00.299 Removing: /var/run/dpdk/spdk_pid57919
00:32:00.299 Removing: /var/run/dpdk/spdk_pid58012
00:32:00.299 Removing: /var/run/dpdk/spdk_pid58119
00:32:00.299 Removing: /var/run/dpdk/spdk_pid58230
00:32:00.299 Removing: /var/run/dpdk/spdk_pid58327
00:32:00.299 Removing: /var/run/dpdk/spdk_pid58372
00:32:00.299 Removing: /var/run/dpdk/spdk_pid58414
00:32:00.299 Removing: /var/run/dpdk/spdk_pid58479
00:32:00.299 Removing: /var/run/dpdk/spdk_pid58585
00:32:00.299 Removing: /var/run/dpdk/spdk_pid59034
00:32:00.299 Removing: /var/run/dpdk/spdk_pid59102
00:32:00.299 Removing: /var/run/dpdk/spdk_pid59160
00:32:00.299 Removing: /var/run/dpdk/spdk_pid59181
00:32:00.299 Removing: /var/run/dpdk/spdk_pid59288
00:32:00.299 Removing: /var/run/dpdk/spdk_pid59304
00:32:00.300 Removing: /var/run/dpdk/spdk_pid59436
00:32:00.300 Removing: /var/run/dpdk/spdk_pid59452
00:32:00.300 Removing: /var/run/dpdk/spdk_pid59516
00:32:00.300 Removing: /var/run/dpdk/spdk_pid59534
00:32:00.300 Removing: /var/run/dpdk/spdk_pid59587
00:32:00.300 Removing: /var/run/dpdk/spdk_pid59616
00:32:00.300 Removing: /var/run/dpdk/spdk_pid59776
00:32:00.300 Removing: /var/run/dpdk/spdk_pid59812
00:32:00.300 Removing: /var/run/dpdk/spdk_pid59896
00:32:00.300 Removing: /var/run/dpdk/spdk_pid60074
00:32:00.300 Removing: /var/run/dpdk/spdk_pid60152
00:32:00.300 Removing: /var/run/dpdk/spdk_pid60194
00:32:00.300 Removing: /var/run/dpdk/spdk_pid60623
00:32:00.300 Removing: /var/run/dpdk/spdk_pid60716
00:32:00.300 Removing: /var/run/dpdk/spdk_pid60825
00:32:00.300 Removing: /var/run/dpdk/spdk_pid60879
00:32:00.300 Removing: /var/run/dpdk/spdk_pid60899
00:32:00.300 Removing: /var/run/dpdk/spdk_pid60983
00:32:00.300 Removing: /var/run/dpdk/spdk_pid61602
00:32:00.300 Removing: /var/run/dpdk/spdk_pid61638
00:32:00.300 Removing: /var/run/dpdk/spdk_pid62107
00:32:00.300 Removing: /var/run/dpdk/spdk_pid62205
00:32:00.300 Removing: /var/run/dpdk/spdk_pid62320
00:32:00.300 Removing: /var/run/dpdk/spdk_pid62373
00:32:00.300 Removing: /var/run/dpdk/spdk_pid62393
00:32:00.300 Removing: /var/run/dpdk/spdk_pid62424
00:32:00.300 Removing: /var/run/dpdk/spdk_pid64257
00:32:00.300 Removing: /var/run/dpdk/spdk_pid64383
00:32:00.300 Removing: /var/run/dpdk/spdk_pid64392
00:32:00.300 Removing: /var/run/dpdk/spdk_pid64405
00:32:00.300 Removing: /var/run/dpdk/spdk_pid64450
00:32:00.300 Removing: /var/run/dpdk/spdk_pid64454
00:32:00.300 Removing: /var/run/dpdk/spdk_pid64466
00:32:00.300 Removing: /var/run/dpdk/spdk_pid64512
00:32:00.300 Removing: /var/run/dpdk/spdk_pid64516
00:32:00.300 Removing: /var/run/dpdk/spdk_pid64528
00:32:00.300 Removing: /var/run/dpdk/spdk_pid64573
00:32:00.300 Removing: /var/run/dpdk/spdk_pid64577
00:32:00.300 Removing: /var/run/dpdk/spdk_pid64589
00:32:00.300 Removing: /var/run/dpdk/spdk_pid65969
00:32:00.300 Removing: /var/run/dpdk/spdk_pid66066
00:32:00.300 Removing: /var/run/dpdk/spdk_pid67468
00:32:00.300 Removing: /var/run/dpdk/spdk_pid69213
00:32:00.300 Removing: /var/run/dpdk/spdk_pid69287
00:32:00.300 Removing: /var/run/dpdk/spdk_pid69363
00:32:00.300 Removing: /var/run/dpdk/spdk_pid69473
00:32:00.300 Removing: /var/run/dpdk/spdk_pid69563
00:32:00.300 Removing: /var/run/dpdk/spdk_pid69660
00:32:00.300 Removing: /var/run/dpdk/spdk_pid69733
00:32:00.300 Removing: /var/run/dpdk/spdk_pid69805
00:32:00.300 Removing: /var/run/dpdk/spdk_pid69918
00:32:00.300 Removing: /var/run/dpdk/spdk_pid70012
00:32:00.300 Removing: /var/run/dpdk/spdk_pid70102
00:32:00.300 Removing: /var/run/dpdk/spdk_pid70176
00:32:00.300 Removing: /var/run/dpdk/spdk_pid70257
00:32:00.300 Removing: /var/run/dpdk/spdk_pid70361
00:32:00.300 Removing: /var/run/dpdk/spdk_pid70453
00:32:00.300 Removing: /var/run/dpdk/spdk_pid70549
00:32:00.300 Removing: /var/run/dpdk/spdk_pid70623
00:32:00.300 Removing: /var/run/dpdk/spdk_pid70698
00:32:00.300 Removing: /var/run/dpdk/spdk_pid70801
00:32:00.300 Removing: /var/run/dpdk/spdk_pid70894
00:32:00.300 Removing: /var/run/dpdk/spdk_pid70984
00:32:00.300 Removing: /var/run/dpdk/spdk_pid71058
00:32:00.300 Removing: /var/run/dpdk/spdk_pid71135
00:32:00.300 Removing: /var/run/dpdk/spdk_pid71211
00:32:00.300 Removing: /var/run/dpdk/spdk_pid71285
00:32:00.559 Removing: /var/run/dpdk/spdk_pid71388
00:32:00.559 Removing: /var/run/dpdk/spdk_pid71479
00:32:00.559 Removing: /var/run/dpdk/spdk_pid71568
00:32:00.559 Removing: /var/run/dpdk/spdk_pid71642
00:32:00.559 Removing: /var/run/dpdk/spdk_pid71722
00:32:00.559 Removing: /var/run/dpdk/spdk_pid71795
00:32:00.559 Removing: /var/run/dpdk/spdk_pid71866
00:32:00.559 Removing: /var/run/dpdk/spdk_pid71975
00:32:00.559 Removing: /var/run/dpdk/spdk_pid72060
00:32:00.559 Removing: /var/run/dpdk/spdk_pid72213
00:32:00.559 Removing: /var/run/dpdk/spdk_pid72492
00:32:00.559 Removing: /var/run/dpdk/spdk_pid72524
00:32:00.559 Removing: /var/run/dpdk/spdk_pid72977
00:32:00.559 Removing: /var/run/dpdk/spdk_pid73155
00:32:00.559 Removing: /var/run/dpdk/spdk_pid73259
00:32:00.559 Removing: /var/run/dpdk/spdk_pid73368
00:32:00.559 Removing: /var/run/dpdk/spdk_pid73416
00:32:00.559 Removing: /var/run/dpdk/spdk_pid73436
00:32:00.559 Removing: /var/run/dpdk/spdk_pid73745
00:32:00.559 Removing: /var/run/dpdk/spdk_pid73804
00:32:00.559 Removing: /var/run/dpdk/spdk_pid73877
00:32:00.559 Removing: /var/run/dpdk/spdk_pid74269
00:32:00.559 Removing: /var/run/dpdk/spdk_pid74417
00:32:00.559 Removing: /var/run/dpdk/spdk_pid75218
00:32:00.559 Removing: /var/run/dpdk/spdk_pid75350
00:32:00.559 Removing: /var/run/dpdk/spdk_pid75525
00:32:00.559 Removing: /var/run/dpdk/spdk_pid75622
00:32:00.559 Removing: /var/run/dpdk/spdk_pid75919
00:32:00.559 Removing: /var/run/dpdk/spdk_pid76161
00:32:00.559 Removing: /var/run/dpdk/spdk_pid76497
00:32:00.559 Removing: /var/run/dpdk/spdk_pid76702
00:32:00.559 Removing: /var/run/dpdk/spdk_pid76850
00:32:00.559 Removing: /var/run/dpdk/spdk_pid76908
00:32:00.559 Removing: /var/run/dpdk/spdk_pid77007
00:32:00.559 Removing: /var/run/dpdk/spdk_pid77032
00:32:00.559 Removing: /var/run/dpdk/spdk_pid77085
00:32:00.559 Removing: /var/run/dpdk/spdk_pid77240
00:32:00.559 Removing: /var/run/dpdk/spdk_pid77449
00:32:00.559 Removing: /var/run/dpdk/spdk_pid77721
00:32:00.559 Removing: /var/run/dpdk/spdk_pid78005
00:32:00.559 Removing: /var/run/dpdk/spdk_pid78296
00:32:00.559 Removing: /var/run/dpdk/spdk_pid79304
00:32:00.559 Removing: /var/run/dpdk/spdk_pid79456
00:32:00.559 Removing: /var/run/dpdk/spdk_pid79550
00:32:00.559 Removing: /var/run/dpdk/spdk_pid79981
00:32:00.559 Removing: /var/run/dpdk/spdk_pid80048
00:32:00.559 Removing: /var/run/dpdk/spdk_pid80352
00:32:00.559 Removing: /var/run/dpdk/spdk_pid80633
00:32:00.559 Removing: /var/run/dpdk/spdk_pid81195
00:32:00.559 Removing: /var/run/dpdk/spdk_pid81331
00:32:00.559 Removing: /var/run/dpdk/spdk_pid81373
00:32:00.559 Removing: /var/run/dpdk/spdk_pid81437
00:32:00.559 Removing: /var/run/dpdk/spdk_pid81494
00:32:00.559 Removing: /var/run/dpdk/spdk_pid81552
00:32:00.559 Removing: /var/run/dpdk/spdk_pid81751
00:32:00.559 Removing: /var/run/dpdk/spdk_pid81821
00:32:00.559 Removing: /var/run/dpdk/spdk_pid81944
00:32:00.559 Removing: /var/run/dpdk/spdk_pid82011
00:32:00.559 Removing: /var/run/dpdk/spdk_pid82040
00:32:00.559 Removing: /var/run/dpdk/spdk_pid82109
00:32:00.559 Removing: /var/run/dpdk/spdk_pid82219
00:32:00.559 Clean
00:32:00.559 19:50:19 -- common/autotest_common.sh@1453 -- # return 0
00:32:00.559 19:50:19 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:32:00.559 19:50:19 -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:00.559 19:50:19 -- common/autotest_common.sh@10 -- # set +x
00:32:00.559 19:50:19 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:32:00.559 19:50:19 -- common/autotest_common.sh@732 -- # xtrace_disable
00:32:00.559 19:50:19 -- common/autotest_common.sh@10 -- # set +x
00:32:00.559 19:50:19 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:00.559 19:50:19 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:32:00.559 19:50:19 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:32:00.559 19:50:19 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:32:00.559 19:50:19 -- spdk/autotest.sh@398 -- # hostname
00:32:00.559 19:50:19 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:32:00.818 geninfo: WARNING: invalid characters removed from testname!
00:32:27.353 19:50:42 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:27.353 19:50:45 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:28.285 19:50:47 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:30.814 19:50:49 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:33.341 19:50:51 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:35.286 19:50:53 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:37.196 19:50:56 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:32:37.196 19:50:56 -- spdk/autorun.sh@1 -- $ timing_finish
00:32:37.197 19:50:56 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:32:37.197 19:50:56 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:32:37.197 19:50:56 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:32:37.197 19:50:56 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:37.204 + [[ -n 5020 ]]
00:32:37.204 + sudo kill 5020
00:32:37.209 [Pipeline] }
00:32:37.220 [Pipeline] // timeout
00:32:37.225 [Pipeline] }
00:32:37.241 [Pipeline] // stage
00:32:37.247 [Pipeline] }
00:32:37.261 [Pipeline] // catchError
00:32:37.269 [Pipeline] stage
00:32:37.271 [Pipeline] { (Stop VM)
00:32:37.284 [Pipeline] sh
00:32:37.560 + vagrant halt
00:32:40.084 ==> default: Halting domain...
00:32:43.440 [Pipeline] sh
00:32:43.718 + vagrant destroy -f
00:32:46.246 ==> default: Removing domain...
00:32:46.822 [Pipeline] sh
00:32:47.098 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output
00:32:47.106 [Pipeline] }
00:32:47.123 [Pipeline] // stage
00:32:47.129 [Pipeline] }
00:32:47.144 [Pipeline] // dir
00:32:47.149 [Pipeline] }
00:32:47.165 [Pipeline] // wrap
00:32:47.173 [Pipeline] }
00:32:47.186 [Pipeline] // catchError
00:32:47.196 [Pipeline] stage
00:32:47.199 [Pipeline] { (Epilogue)
00:32:47.214 [Pipeline] sh
00:32:47.490 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:32:52.811 [Pipeline] catchError
00:32:52.813 [Pipeline] {
00:32:52.826 [Pipeline] sh
00:32:53.104 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:32:53.104 Artifacts sizes are good
00:32:53.112 [Pipeline] }
00:32:53.126 [Pipeline] // catchError
00:32:53.137 [Pipeline] archiveArtifacts
00:32:53.147 Archiving artifacts
00:32:53.242 [Pipeline] cleanWs
00:32:53.254 [WS-CLEANUP] Deleting project workspace...
00:32:53.255 [WS-CLEANUP] Deferred wipeout is used...
00:32:53.260 [WS-CLEANUP] done
00:32:53.262 [Pipeline] }
00:32:53.277 [Pipeline] // stage
00:32:53.283 [Pipeline] }
00:32:53.306 [Pipeline] // node
00:32:53.311 [Pipeline] End of Pipeline
00:32:53.364 Finished: SUCCESS